00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2409
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3674
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.211 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.212 The recommended git tool is: git
00:00:00.212 using credential 00000000-0000-0000-0000-000000000002
00:00:00.217 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.253 Fetching changes from the remote Git repository
00:00:00.255 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.289 Using shallow fetch with depth 1
00:00:00.289 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.289 > git --version # timeout=10
00:00:00.315 > git --version # 'git version 2.39.2'
00:00:00.316 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.330 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.330 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.769 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.781 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.793 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.793 > git config core.sparsecheckout # timeout=10
00:00:08.806 > git read-tree -mu HEAD # timeout=10
00:00:08.822 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.844 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.844 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
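Note: the block above is the Jenkins git plugin pinning the job-config repo (jbp) to a fixed revision via a shallow fetch. The same sequence can be reproduced by hand with stock git; a minimal sketch using only the URL and revision shown in the log (credentials and the proxy setting are omitted here):

    # Shallow-fetch one ref, then pin the working tree to the fetched commit,
    # mirroring the fetch/checkout sequence in the log above.
    git init jbp && cd jbp
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507   # revision from the log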
00:00:08.958 [Pipeline] Start of Pipeline
00:00:08.973 [Pipeline] library
00:00:08.975 Loading library shm_lib@master
00:00:08.975 Library shm_lib@master is cached. Copying from home.
00:00:08.989 [Pipeline] node
00:00:09.000 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:09.002 [Pipeline] {
00:00:09.013 [Pipeline] catchError
00:00:09.015 [Pipeline] {
00:00:09.028 [Pipeline] wrap
00:00:09.037 [Pipeline] {
00:00:09.047 [Pipeline] stage
00:00:09.049 [Pipeline] { (Prologue)
00:00:09.069 [Pipeline] echo
00:00:09.071 Node: VM-host-SM38
00:00:09.077 [Pipeline] cleanWs
00:00:09.088 [WS-CLEANUP] Deleting project workspace...
00:00:09.088 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.096 [WS-CLEANUP] done
00:00:09.296 [Pipeline] setCustomBuildProperty
00:00:09.391 [Pipeline] httpRequest
00:00:09.768 [Pipeline] echo
00:00:09.770 Sorcerer 10.211.164.20 is alive
00:00:09.783 [Pipeline] retry
00:00:09.786 [Pipeline] {
00:00:09.803 [Pipeline] httpRequest
00:00:09.808 HttpMethod: GET
00:00:09.809 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.809 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.825 Response Code: HTTP/1.1 200 OK
00:00:09.826 Success: Status code 200 is in the accepted range: 200,404
00:00:09.826 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.579 [Pipeline] }
00:00:12.596 [Pipeline] // retry
00:00:12.604 [Pipeline] sh
00:00:12.891 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.909 [Pipeline] httpRequest
00:00:13.261 [Pipeline] echo
00:00:13.263 Sorcerer 10.211.164.20 is alive
00:00:13.272 [Pipeline] retry
00:00:13.274 [Pipeline] {
00:00:13.291 [Pipeline] httpRequest
00:00:13.297 HttpMethod: GET
00:00:13.298 URL: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:13.298 Sending request to url: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:13.317 Response Code: HTTP/1.1 200 OK
00:00:13.317 Success: Status code 200 is in the accepted range: 200,404
00:00:13.318 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:58.005 [Pipeline] }
00:00:58.022 [Pipeline] // retry
00:00:58.030 [Pipeline] sh
00:00:58.315 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:01:01.636 [Pipeline] sh
00:01:01.926 + git -C spdk log --oneline -n5
00:01:01.926 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:01.926 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:01.926 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:01:01.926 2e10c84c8 nvmf: Expose DIF type of namespace to host again
00:01:01.926 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:01:01.975 [Pipeline] withCredentials
00:01:01.985 > git --version # timeout=10
00:01:01.997 > git --version # 'git version 2.39.2'
00:01:02.013 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:02.014 [Pipeline] {
00:01:02.021 [Pipeline] retry
00:01:02.023 [Pipeline] {
00:01:02.036 [Pipeline] sh
00:01:02.316 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:01:02.896 [Pipeline] }
00:01:02.914 [Pipeline] // retry
00:01:02.920 [Pipeline] }
00:01:02.936 [Pipeline] // withCredentials
00:01:02.946 [Pipeline] httpRequest
00:01:03.324 [Pipeline] echo
00:01:03.326 Sorcerer 10.211.164.20 is alive
00:01:03.336 [Pipeline] retry
00:01:03.338 [Pipeline] {
00:01:03.352 [Pipeline] httpRequest
00:01:03.358 HttpMethod: GET
00:01:03.359 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:03.359 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:03.376 Response Code: HTTP/1.1 200 OK
00:01:03.377 Success: Status code 200 is in the accepted range: 200,404
00:01:03.377 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:18.474 [Pipeline] }
00:01:18.497 [Pipeline] // retry
00:01:18.506 [Pipeline] sh
00:01:18.800 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:20.203 [Pipeline] sh
00:01:20.493 + git -C dpdk log --oneline -n5
00:01:20.493 caf0f5d395 version: 22.11.4
00:01:20.493 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:20.493 dc9c799c7d vhost: fix missing spinlock unlock
00:01:20.494 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:20.494 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:20.517 [Pipeline] writeFile
00:01:20.535 [Pipeline] sh
00:01:20.821 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:20.836 [Pipeline] sh
00:01:21.127 + cat autorun-spdk.conf
00:01:21.127 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:21.127 SPDK_TEST_NVME=1
00:01:21.127 SPDK_TEST_FTL=1
00:01:21.127 SPDK_TEST_ISAL=1
00:01:21.127 SPDK_RUN_ASAN=1
00:01:21.127 SPDK_RUN_UBSAN=1
00:01:21.127 SPDK_TEST_XNVME=1
00:01:21.127 SPDK_TEST_NVME_FDP=1
00:01:21.127 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:21.127 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:21.127 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:21.138 RUN_NIGHTLY=1
00:01:21.140 [Pipeline] }
00:01:21.154 [Pipeline] // stage
00:01:21.171 [Pipeline] stage
00:01:21.174 [Pipeline] { (Run VM)
00:01:21.189 [Pipeline] sh
00:01:21.475 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:21.476 + echo 'Start stage prepare_nvme.sh'
00:01:21.476 Start stage prepare_nvme.sh
00:01:21.476 + [[ -n 2 ]]
00:01:21.476 + disk_prefix=ex2
00:01:21.476 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:21.476 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:21.476 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:21.476 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:21.476 ++ SPDK_TEST_NVME=1
00:01:21.476 ++ SPDK_TEST_FTL=1
00:01:21.476 ++ SPDK_TEST_ISAL=1
00:01:21.476 ++ SPDK_RUN_ASAN=1
00:01:21.476 ++ SPDK_RUN_UBSAN=1
00:01:21.476 ++ SPDK_TEST_XNVME=1
00:01:21.476 ++ SPDK_TEST_NVME_FDP=1
00:01:21.476 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:21.476 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:21.476 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:21.476 ++ RUN_NIGHTLY=1
00:01:21.476 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:21.476 + nvme_files=()
00:01:21.476 + declare -A nvme_files
00:01:21.476 + backend_dir=/var/lib/libvirt/images/backends
00:01:21.476 + nvme_files['nvme.img']=5G
00:01:21.476 + nvme_files['nvme-cmb.img']=5G
00:01:21.476 + nvme_files['nvme-multi0.img']=4G
00:01:21.476 + nvme_files['nvme-multi1.img']=4G
00:01:21.476 + nvme_files['nvme-multi2.img']=4G
00:01:21.476 + nvme_files['nvme-openstack.img']=8G
00:01:21.476 + nvme_files['nvme-zns.img']=5G
00:01:21.476 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:21.476 + (( SPDK_TEST_FTL == 1 ))
00:01:21.476 + nvme_files["nvme-ftl.img"]=6G
00:01:21.476 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:21.476 + nvme_files["nvme-fdp.img"]=1G
00:01:21.476 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:21.476 + for nvme in "${!nvme_files[@]}"
00:01:21.476 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:01:21.476 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:21.476 + for nvme in "${!nvme_files[@]}"
00:01:21.476 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G
00:01:22.048 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:22.048 + for nvme in "${!nvme_files[@]}"
00:01:22.048 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:01:22.048 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:22.048 + for nvme in "${!nvme_files[@]}"
00:01:22.048 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:01:22.048 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:22.048 + for nvme in "${!nvme_files[@]}"
00:01:22.048 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:01:23.010 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:23.010 + for nvme in "${!nvme_files[@]}"
00:01:23.010 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:01:23.010 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:23.010 + for nvme in "${!nvme_files[@]}"
00:01:23.010 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:01:23.010 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:23.010 + for nvme in "${!nvme_files[@]}"
00:01:23.010 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G
00:01:23.299 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:23.299 + for nvme in "${!nvme_files[@]}"
00:01:23.299 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:01:23.871 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:23.871 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:01:23.871 + echo 'End stage prepare_nvme.sh'
00:01:23.871 End stage prepare_nvme.sh
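Note: prepare_nvme.sh drives one create_nvme_img.sh call per entry in the nvme_files associative array. The script's internals are not part of this log; a minimal sketch of the same loop, assuming the "Formatting ... fmt=raw ... preallocation=falloc" lines above come from a qemu-img invocation:

    # One raw, fallocate-preallocated backing file per NVMe disk (sizes from the log).
    declare -A nvme_files=( [nvme.img]=5G [nvme-ftl.img]=6G [nvme-fdp.img]=1G )  # subset shown
    backend_dir=/var/lib/libvirt/images/backends
    for nvme in "${!nvme_files[@]}"; do
        qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/ex2-$nvme" "${nvme_files[$nvme]}"
    done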
00:01:23.886 [Pipeline] sh
00:01:24.178 + DISTRO=fedora39
00:01:24.179 + CPUS=10
00:01:24.179 + RAM=12288
00:01:24.179 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:24.179 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:24.179
00:01:24.179 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:24.179 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:24.179 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:24.179 HELP=0
00:01:24.179 DRY_RUN=0
00:01:24.179 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,
00:01:24.179 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:24.179 NVME_AUTO_CREATE=0
00:01:24.179 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,,
00:01:24.179 NVME_CMB=,,,,
00:01:24.179 NVME_PMR=,,,,
00:01:24.179 NVME_ZNS=,,,,
00:01:24.179 NVME_MS=true,,,,
00:01:24.179 NVME_FDP=,,,on,
00:01:24.179 SPDK_VAGRANT_DISTRO=fedora39
00:01:24.179 SPDK_VAGRANT_VMCPU=10
00:01:24.179 SPDK_VAGRANT_VMRAM=12288
00:01:24.179 SPDK_VAGRANT_PROVIDER=libvirt
00:01:24.179 SPDK_VAGRANT_HTTP_PROXY=
00:01:24.179 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:24.179 SPDK_OPENSTACK_NETWORK=0
00:01:24.179 VAGRANT_PACKAGE_BOX=0
00:01:24.179 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:24.179 FORCE_DISTRO=true
00:01:24.179 VAGRANT_BOX_VERSION=
00:01:24.179 EXTRA_VAGRANTFILES=
00:01:24.179 NIC_MODEL=e1000
00:01:24.179
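Note: the NVME_* variables are positional, comma-separated lists with one field per emulated controller (here: ftl, plain, multi-namespace, fdp). NVME_MS=true,,,, gives controller 0 metadata support, and NVME_FDP=,,,on, turns Flexible Data Placement on for controller 3, which is what the QEMU arguments further below encode. Illustrative only; the real parsing lives in spdk/scripts/vagrant/Vagrantfile, which is not shown in this log:

    # Split a per-controller field into an array; empty slots mean "feature off".
    NVME_FDP=",,,on,"
    IFS=',' read -ra fdp <<< "$NVME_FDP"
    for i in "${!fdp[@]}"; do echo "controller $i: fdp=${fdp[$i]:-off}"; done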
00:01:24.179 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:24.179 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:26.726 Bringing machine 'default' up with 'libvirt' provider...
00:01:26.989 ==> default: Creating image (snapshot of base box volume).
00:01:27.252 ==> default: Creating domain with the following settings...
00:01:27.252 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732769456_8c245ca53cbce7044bc3
00:01:27.252 ==> default: -- Domain type: kvm
00:01:27.252 ==> default: -- Cpus: 10
00:01:27.252 ==> default: -- Feature: acpi
00:01:27.252 ==> default: -- Feature: apic
00:01:27.252 ==> default: -- Feature: pae
00:01:27.252 ==> default: -- Memory: 12288M
00:01:27.252 ==> default: -- Memory Backing: hugepages:
00:01:27.252 ==> default: -- Management MAC:
00:01:27.252 ==> default: -- Loader:
00:01:27.252 ==> default: -- Nvram:
00:01:27.252 ==> default: -- Base box: spdk/fedora39
00:01:27.252 ==> default: -- Storage pool: default
00:01:27.252 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732769456_8c245ca53cbce7044bc3.img (20G)
00:01:27.252 ==> default: -- Volume Cache: default
00:01:27.252 ==> default: -- Kernel:
00:01:27.252 ==> default: -- Initrd:
00:01:27.252 ==> default: -- Graphics Type: vnc
00:01:27.252 ==> default: -- Graphics Port: -1
00:01:27.252 ==> default: -- Graphics IP: 127.0.0.1
00:01:27.252 ==> default: -- Graphics Password: Not defined
00:01:27.252 ==> default: -- Video Type: cirrus
00:01:27.252 ==> default: -- Video VRAM: 9216
00:01:27.252 ==> default: -- Sound Type:
00:01:27.252 ==> default: -- Keymap: en-us
00:01:27.252 ==> default: -- TPM Path:
00:01:27.252 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:27.252 ==> default: -- Command line args:
00:01:27.252 ==> default: -> value=-device,
00:01:27.252 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:27.252 ==> default: -> value=-drive,
00:01:27.252 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:27.252 ==> default: -> value=-device,
00:01:27.252 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:27.252 ==> default: -> value=-device,
00:01:27.252 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:27.252 ==> default: -> value=-drive,
00:01:27.252 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0,
00:01:27.252 ==> default: -> value=-device,
00:01:27.252 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:27.252 ==> default: -> value=-device,
00:01:27.252 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:27.252 ==> default: -> value=-drive,
00:01:27.252 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:27.252 ==> default: -> value=-device,
00:01:27.252 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:27.252 ==> default: -> value=-drive,
00:01:27.252 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:27.252 ==> default: -> value=-device,
00:01:27.252 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:27.252 ==> default: -> value=-drive,
00:01:27.252 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:27.252 ==> default: -> value=-device,
00:01:27.252 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:27.252 ==> default: -> value=-device,
00:01:27.252 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:27.252 ==> default: -> value=-device,
00:01:27.252 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:27.252 ==> default: -> value=-drive,
00:01:27.252 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:27.252 ==> default: -> value=-device,
00:01:27.252 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:27.513 ==> default: Creating shared folders metadata...
00:01:27.513 ==> default: Starting domain.
00:01:29.432 ==> default: Waiting for domain to get an IP address...
00:01:47.562 ==> default: Waiting for SSH to become available...
00:01:47.562 ==> default: Configuring and enabling network interfaces...
00:01:50.868 default: SSH address: 192.168.121.129:22
00:01:50.868 default: SSH username: vagrant
00:01:50.868 default: SSH auth method: private key
00:01:52.786 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:00.933 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:07.520 ==> default: Mounting SSHFS shared folder...
00:02:08.462 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:08.462 ==> default: Checking Mount..
00:02:09.847 ==> default: Folder Successfully Mounted!
00:02:09.847
00:02:09.847 SUCCESS!
00:02:09.847
00:02:09.847 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:09.847 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:09.847 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:09.847
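Note: the -device/-drive argument pairs above wire four emulated NVMe controllers to the raw backends created earlier; the last controller sits under an NVMe subsystem with Flexible Data Placement enabled (fdp.runs, fdp.nrg and fdp.nruh size the reclaim units, reclaim groups and reclaim unit handles). Trimmed to just that FDP controller, the QEMU invocation looks like this (arguments copied from the log; the rest of the machine definition is omitted):

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096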
00:02:09.857 [Pipeline] }
00:02:09.872 [Pipeline] // stage
00:02:09.881 [Pipeline] dir
00:02:09.882 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:09.883 [Pipeline] {
00:02:09.896 [Pipeline] catchError
00:02:09.897 [Pipeline] {
00:02:09.909 [Pipeline] sh
00:02:10.240 + vagrant ssh-config --host vagrant
00:02:10.240 + sed -ne '/^Host/,$p'
00:02:10.240 + tee ssh_conf
00:02:12.790 Host vagrant
00:02:12.790   HostName 192.168.121.129
00:02:12.790   User vagrant
00:02:12.790   Port 22
00:02:12.790   UserKnownHostsFile /dev/null
00:02:12.790   StrictHostKeyChecking no
00:02:12.790   PasswordAuthentication no
00:02:12.790   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:12.790   IdentitiesOnly yes
00:02:12.790   LogLevel FATAL
00:02:12.790   ForwardAgent yes
00:02:12.790   ForwardX11 yes
00:02:12.790
00:02:12.807 [Pipeline] withEnv
00:02:12.811 [Pipeline] {
00:02:12.826 [Pipeline] sh
00:02:13.109 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:13.109 source /etc/os-release
00:02:13.109 [[ -e /image.version ]] && img=$(< /image.version)
00:02:13.109 # Minimal, systemd-like check.
00:02:13.109 if [[ -e /.dockerenv ]]; then
00:02:13.109 # Clear garbage from the node'\''s name:
00:02:13.109 # agt-er_autotest_547-896 -> autotest_547-896
00:02:13.109 # $HOSTNAME is the actual container id
00:02:13.109 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:13.109 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:13.109 # We can assume this is a mount from a host where container is running,
00:02:13.109 # so fetch its hostname to easily identify the target swarm worker.
00:02:13.109 container="$(< /etc/hostname) ($agent)"
00:02:13.109 else
00:02:13.109 # Fallback
00:02:13.109 container=$agent
00:02:13.109 fi
00:02:13.109 fi
00:02:13.109 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:13.109 '
00:02:13.382 [Pipeline] }
00:02:13.402 [Pipeline] // withEnv
00:02:13.412 [Pipeline] setCustomBuildProperty
00:02:13.428 [Pipeline] stage
00:02:13.431 [Pipeline] { (Tests)
00:02:13.451 [Pipeline] sh
00:02:13.741 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:14.019 [Pipeline] sh
00:02:14.307 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:14.588 [Pipeline] timeout
00:02:14.588 Timeout set to expire in 50 min
00:02:14.590 [Pipeline] {
00:02:14.607 [Pipeline] sh
00:02:14.897 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:15.469 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:02:15.485 [Pipeline] sh
00:02:15.773 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:16.047 [Pipeline] sh
00:02:16.330 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:16.612 [Pipeline] sh
00:02:16.898 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:17.159 ++ readlink -f spdk_repo
00:02:17.159 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:17.159 + [[ -n /home/vagrant/spdk_repo ]]
00:02:17.159 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:17.159 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:17.159 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:17.159 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:17.159 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:17.159 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:17.159 + cd /home/vagrant/spdk_repo
00:02:17.159 + source /etc/os-release
00:02:17.159 ++ NAME='Fedora Linux'
00:02:17.159 ++ VERSION='39 (Cloud Edition)'
00:02:17.159 ++ ID=fedora
00:02:17.159 ++ VERSION_ID=39
00:02:17.159 ++ VERSION_CODENAME=
00:02:17.159 ++ PLATFORM_ID=platform:f39
00:02:17.159 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:17.159 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:17.159 ++ LOGO=fedora-logo-icon
00:02:17.159 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:17.159 ++ HOME_URL=https://fedoraproject.org/
00:02:17.159 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:17.159 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:17.159 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:17.159 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:17.159 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:17.159 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:17.159 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:17.159 ++ SUPPORT_END=2024-11-12
00:02:17.159 ++ VARIANT='Cloud Edition'
00:02:17.159 ++ VARIANT_ID=cloud
00:02:17.159 + uname -a
00:02:17.159 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:17.159 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:17.420 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:17.680 Hugepages
00:02:17.680 node hugesize free / total
00:02:17.680 node0 1048576kB 0 / 0
00:02:17.680 node0 2048kB 0 / 0
00:02:17.681
00:02:17.681 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:17.681 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:17.681 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:17.681 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:17.976 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:17.976 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:17.976 + rm -f /tmp/spdk-ld-path
00:02:17.976 + source autorun-spdk.conf
00:02:17.976 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:17.976 ++ SPDK_TEST_NVME=1
00:02:17.976 ++ SPDK_TEST_FTL=1
00:02:17.976 ++ SPDK_TEST_ISAL=1
00:02:17.976 ++ SPDK_RUN_ASAN=1
00:02:17.976 ++ SPDK_RUN_UBSAN=1
00:02:17.976 ++ SPDK_TEST_XNVME=1
00:02:17.976 ++ SPDK_TEST_NVME_FDP=1
00:02:17.976 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:17.976 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:17.976 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:17.976 ++ RUN_NIGHTLY=1
00:02:17.976 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:17.976 + [[ -n '' ]]
00:02:17.976 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:17.976 + for M in /var/spdk/build-*-manifest.txt
00:02:17.976 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:17.976 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:17.976 + for M in /var/spdk/build-*-manifest.txt
00:02:17.976 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:17.976 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:17.976 + for M in /var/spdk/build-*-manifest.txt
00:02:17.976 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:17.976 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:17.976 ++ uname
00:02:17.976 + [[ Linux ==
\L\i\n\u\x ]] 00:02:17.976 + sudo dmesg -T 00:02:17.976 + sudo dmesg --clear 00:02:17.976 + dmesg_pid=5765 00:02:17.976 + [[ Fedora Linux == FreeBSD ]] 00:02:17.976 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:17.976 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:17.976 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:17.976 + [[ -x /usr/src/fio-static/fio ]] 00:02:17.976 + sudo dmesg -Tw 00:02:17.976 + export FIO_BIN=/usr/src/fio-static/fio 00:02:17.976 + FIO_BIN=/usr/src/fio-static/fio 00:02:17.977 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:17.977 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:17.977 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:17.977 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:17.977 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:17.977 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:17.977 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:17.977 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:17.977 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:17.977 04:51:47 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:17.977 04:51:47 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:17.977 04:51:47 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:17.977 04:51:47 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:17.977 04:51:47 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:17.977 04:51:47 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:17.977 04:51:47 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:17.977 04:51:47 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:17.977 04:51:47 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:17.977 04:51:47 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:17.977 04:51:47 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:17.977 04:51:47 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:17.977 04:51:47 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:17.977 04:51:47 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:02:17.977 04:51:47 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:17.977 04:51:47 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:17.977 04:51:47 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:17.977 04:51:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:17.977 04:51:47 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:17.977 04:51:47 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:17.977 04:51:47 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:17.977 04:51:47 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:17.977 04:51:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.977 04:51:47 -- 
paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.977 04:51:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.977 04:51:47 -- paths/export.sh@5 -- $ export PATH 00:02:17.977 04:51:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.977 04:51:47 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:17.977 04:51:47 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:17.977 04:51:47 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732769507.XXXXXX 00:02:17.977 04:51:47 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732769507.L131Gj 00:02:17.977 04:51:47 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:17.977 04:51:47 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']' 00:02:17.977 04:51:47 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:17.977 04:51:47 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:18.239 04:51:47 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:18.239 04:51:47 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:18.239 04:51:47 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:18.239 04:51:47 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:18.239 04:51:47 -- common/autotest_common.sh@10 -- $ set +x 00:02:18.239 04:51:47 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:18.239 04:51:47 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:18.239 04:51:47 -- pm/common@17 -- $ local monitor 00:02:18.239 04:51:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.239 04:51:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.239 04:51:47 -- pm/common@25 -- $ 
sleep 1 00:02:18.239 04:51:47 -- pm/common@21 -- $ date +%s 00:02:18.239 04:51:47 -- pm/common@21 -- $ date +%s 00:02:18.239 04:51:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732769507 00:02:18.239 04:51:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732769507 00:02:18.239 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732769507_collect-cpu-load.pm.log 00:02:18.239 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732769507_collect-vmstat.pm.log 00:02:19.183 04:51:48 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:19.183 04:51:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:19.183 04:51:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:19.183 04:51:48 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:19.183 04:51:48 -- spdk/autobuild.sh@16 -- $ date -u 00:02:19.183 Thu Nov 28 04:51:48 AM UTC 2024 00:02:19.183 04:51:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:19.183 v25.01-pre-276-g35cd3e84d 00:02:19.183 04:51:48 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:19.183 04:51:48 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:19.183 04:51:48 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:19.183 04:51:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:19.183 04:51:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:19.183 ************************************ 00:02:19.183 START TEST asan 00:02:19.183 ************************************ 00:02:19.183 using asan 00:02:19.183 04:51:48 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:19.183 00:02:19.183 real 0m0.000s 00:02:19.183 user 0m0.000s 00:02:19.183 sys 0m0.000s 00:02:19.183 04:51:48 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:19.183 ************************************ 00:02:19.183 END TEST asan 00:02:19.183 ************************************ 00:02:19.183 04:51:48 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:19.183 04:51:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:19.183 04:51:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:19.183 04:51:48 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:19.183 04:51:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:19.183 04:51:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:19.183 ************************************ 00:02:19.183 START TEST ubsan 00:02:19.183 ************************************ 00:02:19.183 using ubsan 00:02:19.183 04:51:48 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:19.183 00:02:19.183 real 0m0.000s 00:02:19.183 user 0m0.000s 00:02:19.183 sys 0m0.000s 00:02:19.183 04:51:48 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:19.183 ************************************ 00:02:19.183 END TEST ubsan 00:02:19.183 ************************************ 00:02:19.183 04:51:48 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:19.183 04:51:48 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:19.183 04:51:48 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:19.183 04:51:48 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:19.183 04:51:48 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 
1 ']' 00:02:19.183 04:51:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:19.183 04:51:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:19.183 ************************************ 00:02:19.183 START TEST build_native_dpdk 00:02:19.183 ************************************ 00:02:19.183 04:51:48 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:19.183 04:51:48 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:19.443 caf0f5d395 version: 22.11.4 00:02:19.443 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:19.443 dc9c799c7d vhost: fix missing spinlock unlock 00:02:19.443 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:19.443 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:19.443 04:51:48 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:19.443 04:51:48 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:19.443 04:51:48 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:02:19.444 patching file config/rte_config.h
00:02:19.444 Hunk #1 succeeded at 60 (offset 1 line).
00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1
00:02:19.444 patching file lib/pcapng/rte_pcapng.c
00:02:19.444 Hunk #1 succeeded at 110 (offset -18 lines).
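Note: the xtrace above is SPDK's cmp_versions helper (scripts/common.sh) gating DPDK compatibility patches: 22.11.4 is not < 21.11.0, so the rte_config.h hunk applied, and it is < 24.07.0, so the rte_pcapng.c hunk applied as well. Reconstructed from the trace and slightly simplified, the comparison reduces to roughly:

    # Sketch of cmp_versions as seen in the xtrace; the authoritative
    # implementation lives in spdk/scripts/common.sh.
    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == ">" || $op == ">=" ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == "<" || $op == "<=" ]]; return; fi
        done
        [[ $op == *=* ]]  # equal component-by-component
    }
    lt() { cmp_versions "$1" "<" "$2"; }    # lt 22.11.4 24.07.0 -> true  (patch applies)
    ge() { cmp_versions "$1" ">=" "$2"; }   # ge 22.11.4 24.07.0 -> false (as in the log)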
00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:19.444 04:51:48 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:02:19.444 04:51:48 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:23.651 The Meson build system
00:02:23.651 Version: 1.5.0
00:02:23.651 Source dir: /home/vagrant/spdk_repo/dpdk
00:02:23.651 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:02:23.651 Build type: native build
00:02:23.651 Program cat found: YES (/usr/bin/cat)
00:02:23.651 Project name: DPDK
00:02:23.651 Project version: 22.11.4
00:02:23.651 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:23.651 C linker for the host machine: gcc ld.bfd 2.40-14
00:02:23.651 Host machine cpu family: x86_64
00:02:23.651 Host machine cpu: x86_64
00:02:23.651 Message: ## Building in Developer Mode ##
00:02:23.651 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:23.651 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:02:23.651 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:02:23.651 Program objdump found: YES (/usr/bin/objdump)
00:02:23.651 Program python3 found: YES (/usr/bin/python3)
00:02:23.651 Program cat found: YES (/usr/bin/cat)
00:02:23.651 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:02:23.651 Checking for size of "void *" : 8 00:02:23.651 Checking for size of "void *" : 8 (cached) 00:02:23.651 Library m found: YES 00:02:23.651 Library numa found: YES 00:02:23.651 Has header "numaif.h" : YES 00:02:23.651 Library fdt found: NO 00:02:23.651 Library execinfo found: NO 00:02:23.651 Has header "execinfo.h" : YES 00:02:23.651 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:23.651 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:23.651 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:23.651 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:23.651 Run-time dependency openssl found: YES 3.1.1 00:02:23.651 Run-time dependency libpcap found: YES 1.10.4 00:02:23.651 Has header "pcap.h" with dependency libpcap: YES 00:02:23.651 Compiler for C supports arguments -Wcast-qual: YES 00:02:23.651 Compiler for C supports arguments -Wdeprecated: YES 00:02:23.651 Compiler for C supports arguments -Wformat: YES 00:02:23.651 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:23.651 Compiler for C supports arguments -Wformat-security: NO 00:02:23.651 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:23.651 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:23.651 Compiler for C supports arguments -Wnested-externs: YES 00:02:23.651 Compiler for C supports arguments -Wold-style-definition: YES 00:02:23.651 Compiler for C supports arguments -Wpointer-arith: YES 00:02:23.651 Compiler for C supports arguments -Wsign-compare: YES 00:02:23.651 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:23.651 Compiler for C supports arguments -Wundef: YES 00:02:23.651 Compiler for C supports arguments -Wwrite-strings: YES 00:02:23.651 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:23.651 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:23.651 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:23.651 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:23.652 Compiler for C supports arguments -mavx512f: YES 00:02:23.652 Checking if "AVX512 checking" compiles: YES 00:02:23.652 Fetching value of define "__SSE4_2__" : 1 00:02:23.652 Fetching value of define "__AES__" : 1 00:02:23.652 Fetching value of define "__AVX__" : 1 00:02:23.652 Fetching value of define "__AVX2__" : 1 00:02:23.652 Fetching value of define "__AVX512BW__" : 1 00:02:23.652 Fetching value of define "__AVX512CD__" : 1 00:02:23.652 Fetching value of define "__AVX512DQ__" : 1 00:02:23.652 Fetching value of define "__AVX512F__" : 1 00:02:23.652 Fetching value of define "__AVX512VL__" : 1 00:02:23.652 Fetching value of define "__PCLMUL__" : 1 00:02:23.652 Fetching value of define "__RDRND__" : 1 00:02:23.652 Fetching value of define "__RDSEED__" : 1 00:02:23.652 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:23.652 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:23.652 Message: lib/kvargs: Defining dependency "kvargs" 00:02:23.652 Message: lib/telemetry: Defining dependency "telemetry" 00:02:23.652 Checking for function "getentropy" : YES 00:02:23.652 Message: lib/eal: Defining dependency "eal" 00:02:23.652 Message: lib/ring: Defining dependency "ring" 00:02:23.652 Message: lib/rcu: Defining dependency "rcu" 00:02:23.652 Message: lib/mempool: Defining dependency "mempool" 00:02:23.652 Message: lib/mbuf: Defining dependency "mbuf" 00:02:23.652 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:23.652 Fetching value of 
define "__AVX512F__" : 1 (cached) 00:02:23.652 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.652 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:23.652 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:23.652 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:23.652 Compiler for C supports arguments -mpclmul: YES 00:02:23.652 Compiler for C supports arguments -maes: YES 00:02:23.652 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:23.652 Compiler for C supports arguments -mavx512bw: YES 00:02:23.652 Compiler for C supports arguments -mavx512dq: YES 00:02:23.652 Compiler for C supports arguments -mavx512vl: YES 00:02:23.652 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:23.652 Compiler for C supports arguments -mavx2: YES 00:02:23.652 Compiler for C supports arguments -mavx: YES 00:02:23.652 Message: lib/net: Defining dependency "net" 00:02:23.652 Message: lib/meter: Defining dependency "meter" 00:02:23.652 Message: lib/ethdev: Defining dependency "ethdev" 00:02:23.652 Message: lib/pci: Defining dependency "pci" 00:02:23.652 Message: lib/cmdline: Defining dependency "cmdline" 00:02:23.652 Message: lib/metrics: Defining dependency "metrics" 00:02:23.652 Message: lib/hash: Defining dependency "hash" 00:02:23.652 Message: lib/timer: Defining dependency "timer" 00:02:23.652 Fetching value of define "__AVX2__" : 1 (cached) 00:02:23.652 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.652 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:23.652 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:23.652 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.652 Message: lib/acl: Defining dependency "acl" 00:02:23.652 Message: lib/bbdev: Defining dependency "bbdev" 00:02:23.652 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:23.652 Run-time dependency libelf found: YES 0.191 00:02:23.652 Message: lib/bpf: Defining dependency "bpf" 00:02:23.652 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:23.652 Message: lib/compressdev: Defining dependency "compressdev" 00:02:23.652 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:23.652 Message: lib/distributor: Defining dependency "distributor" 00:02:23.652 Message: lib/efd: Defining dependency "efd" 00:02:23.652 Message: lib/eventdev: Defining dependency "eventdev" 00:02:23.652 Message: lib/gpudev: Defining dependency "gpudev" 00:02:23.652 Message: lib/gro: Defining dependency "gro" 00:02:23.652 Message: lib/gso: Defining dependency "gso" 00:02:23.652 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:23.652 Message: lib/jobstats: Defining dependency "jobstats" 00:02:23.652 Message: lib/latencystats: Defining dependency "latencystats" 00:02:23.652 Message: lib/lpm: Defining dependency "lpm" 00:02:23.652 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.652 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:23.652 Fetching value of define "__AVX512IFMA__" : 1 00:02:23.652 Message: lib/member: Defining dependency "member" 00:02:23.652 Message: lib/pcapng: Defining dependency "pcapng" 00:02:23.652 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:23.652 Message: lib/power: Defining dependency "power" 00:02:23.652 Message: lib/rawdev: Defining dependency "rawdev" 00:02:23.652 Message: lib/regexdev: Defining dependency "regexdev" 00:02:23.652 Message: lib/dmadev: Defining dependency "dmadev" 00:02:23.652 Message: lib/rib: Defining dependency "rib" 00:02:23.652 Message: lib/reorder: 
Defining dependency "reorder" 00:02:23.652 Message: lib/sched: Defining dependency "sched" 00:02:23.652 Message: lib/security: Defining dependency "security" 00:02:23.652 Message: lib/stack: Defining dependency "stack" 00:02:23.652 Has header "linux/userfaultfd.h" : YES 00:02:23.652 Message: lib/vhost: Defining dependency "vhost" 00:02:23.652 Message: lib/ipsec: Defining dependency "ipsec" 00:02:23.652 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.652 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:23.652 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.652 Message: lib/fib: Defining dependency "fib" 00:02:23.652 Message: lib/port: Defining dependency "port" 00:02:23.652 Message: lib/pdump: Defining dependency "pdump" 00:02:23.652 Message: lib/table: Defining dependency "table" 00:02:23.652 Message: lib/pipeline: Defining dependency "pipeline" 00:02:23.652 Message: lib/graph: Defining dependency "graph" 00:02:23.652 Message: lib/node: Defining dependency "node" 00:02:23.652 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:23.652 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:23.652 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:23.652 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:23.652 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:23.652 Compiler for C supports arguments -Wno-unused-value: YES 00:02:23.652 Compiler for C supports arguments -Wno-format: YES 00:02:23.652 Compiler for C supports arguments -Wno-format-security: YES 00:02:23.652 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:23.652 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:23.652 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:23.652 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:25.042 Fetching value of define "__AVX2__" : 1 (cached) 00:02:25.042 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:25.042 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:25.042 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:25.042 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:25.042 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:25.042 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:25.042 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:25.042 Configuring doxy-api.conf using configuration 00:02:25.042 Program sphinx-build found: NO 00:02:25.042 Configuring rte_build_config.h using configuration 00:02:25.042 Message: 00:02:25.042 ================= 00:02:25.042 Applications Enabled 00:02:25.042 ================= 00:02:25.042 00:02:25.042 apps: 00:02:25.042 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:25.042 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:25.042 test-security-perf, 00:02:25.042 00:02:25.042 Message: 00:02:25.042 ================= 00:02:25.042 Libraries Enabled 00:02:25.042 ================= 00:02:25.042 00:02:25.042 libs: 00:02:25.042 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:25.042 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:25.042 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:25.042 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:25.042 member, pcapng, power, rawdev, regexdev, dmadev, rib, 
reorder, 00:02:25.042 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:25.042 table, pipeline, graph, node, 00:02:25.042 00:02:25.042 Message: 00:02:25.042 =============== 00:02:25.042 Drivers Enabled 00:02:25.042 =============== 00:02:25.042 00:02:25.042 common: 00:02:25.042 00:02:25.042 bus: 00:02:25.042 pci, vdev, 00:02:25.042 mempool: 00:02:25.042 ring, 00:02:25.042 dma: 00:02:25.042 00:02:25.042 net: 00:02:25.042 i40e, 00:02:25.042 raw: 00:02:25.042 00:02:25.042 crypto: 00:02:25.042 00:02:25.042 compress: 00:02:25.042 00:02:25.042 regex: 00:02:25.042 00:02:25.042 vdpa: 00:02:25.042 00:02:25.042 event: 00:02:25.042 00:02:25.042 baseband: 00:02:25.042 00:02:25.042 gpu: 00:02:25.042 00:02:25.042 00:02:25.042 Message: 00:02:25.042 ================= 00:02:25.042 Content Skipped 00:02:25.042 ================= 00:02:25.042 00:02:25.042 apps: 00:02:25.042 00:02:25.042 libs: 00:02:25.042 kni: explicitly disabled via build config (deprecated lib) 00:02:25.042 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:25.042 00:02:25.042 drivers: 00:02:25.042 common/cpt: not in enabled drivers build config 00:02:25.042 common/dpaax: not in enabled drivers build config 00:02:25.042 common/iavf: not in enabled drivers build config 00:02:25.042 common/idpf: not in enabled drivers build config 00:02:25.042 common/mvep: not in enabled drivers build config 00:02:25.042 common/octeontx: not in enabled drivers build config 00:02:25.042 bus/auxiliary: not in enabled drivers build config 00:02:25.042 bus/dpaa: not in enabled drivers build config 00:02:25.042 bus/fslmc: not in enabled drivers build config 00:02:25.042 bus/ifpga: not in enabled drivers build config 00:02:25.042 bus/vmbus: not in enabled drivers build config 00:02:25.042 common/cnxk: not in enabled drivers build config 00:02:25.042 common/mlx5: not in enabled drivers build config 00:02:25.042 common/qat: not in enabled drivers build config 00:02:25.042 common/sfc_efx: not in enabled drivers build config 00:02:25.042 mempool/bucket: not in enabled drivers build config 00:02:25.042 mempool/cnxk: not in enabled drivers build config 00:02:25.042 mempool/dpaa: not in enabled drivers build config 00:02:25.042 mempool/dpaa2: not in enabled drivers build config 00:02:25.042 mempool/octeontx: not in enabled drivers build config 00:02:25.042 mempool/stack: not in enabled drivers build config 00:02:25.042 dma/cnxk: not in enabled drivers build config 00:02:25.042 dma/dpaa: not in enabled drivers build config 00:02:25.042 dma/dpaa2: not in enabled drivers build config 00:02:25.042 dma/hisilicon: not in enabled drivers build config 00:02:25.042 dma/idxd: not in enabled drivers build config 00:02:25.042 dma/ioat: not in enabled drivers build config 00:02:25.042 dma/skeleton: not in enabled drivers build config 00:02:25.042 net/af_packet: not in enabled drivers build config 00:02:25.042 net/af_xdp: not in enabled drivers build config 00:02:25.042 net/ark: not in enabled drivers build config 00:02:25.042 net/atlantic: not in enabled drivers build config 00:02:25.042 net/avp: not in enabled drivers build config 00:02:25.042 net/axgbe: not in enabled drivers build config 00:02:25.042 net/bnx2x: not in enabled drivers build config 00:02:25.042 net/bnxt: not in enabled drivers build config 00:02:25.042 net/bonding: not in enabled drivers build config 00:02:25.042 net/cnxk: not in enabled drivers build config 00:02:25.042 net/cxgbe: not in enabled drivers build config 00:02:25.042 net/dpaa: not in enabled drivers build config 
00:02:25.042 net/dpaa2: not in enabled drivers build config
00:02:25.042 net/e1000: not in enabled drivers build config
00:02:25.042 net/ena: not in enabled drivers build config
00:02:25.042 net/enetc: not in enabled drivers build config
00:02:25.042 net/enetfec: not in enabled drivers build config
00:02:25.042 net/enic: not in enabled drivers build config
00:02:25.042 net/failsafe: not in enabled drivers build config
00:02:25.042 net/fm10k: not in enabled drivers build config
00:02:25.042 net/gve: not in enabled drivers build config
00:02:25.042 net/hinic: not in enabled drivers build config
00:02:25.042 net/hns3: not in enabled drivers build config
00:02:25.042 net/iavf: not in enabled drivers build config
00:02:25.042 net/ice: not in enabled drivers build config
00:02:25.042 net/idpf: not in enabled drivers build config
00:02:25.042 net/igc: not in enabled drivers build config
00:02:25.042 net/ionic: not in enabled drivers build config
00:02:25.042 net/ipn3ke: not in enabled drivers build config
00:02:25.042 net/ixgbe: not in enabled drivers build config
00:02:25.042 net/kni: not in enabled drivers build config
00:02:25.042 net/liquidio: not in enabled drivers build config
00:02:25.042 net/mana: not in enabled drivers build config
00:02:25.043 net/memif: not in enabled drivers build config
00:02:25.043 net/mlx4: not in enabled drivers build config
00:02:25.043 net/mlx5: not in enabled drivers build config
00:02:25.043 net/mvneta: not in enabled drivers build config
00:02:25.043 net/mvpp2: not in enabled drivers build config
00:02:25.043 net/netvsc: not in enabled drivers build config
00:02:25.043 net/nfb: not in enabled drivers build config
00:02:25.043 net/nfp: not in enabled drivers build config
00:02:25.043 net/ngbe: not in enabled drivers build config
00:02:25.043 net/null: not in enabled drivers build config
00:02:25.043 net/octeontx: not in enabled drivers build config
00:02:25.043 net/octeon_ep: not in enabled drivers build config
00:02:25.043 net/pcap: not in enabled drivers build config
00:02:25.043 net/pfe: not in enabled drivers build config
00:02:25.043 net/qede: not in enabled drivers build config
00:02:25.043 net/ring: not in enabled drivers build config
00:02:25.043 net/sfc: not in enabled drivers build config
00:02:25.043 net/softnic: not in enabled drivers build config
00:02:25.043 net/tap: not in enabled drivers build config
00:02:25.043 net/thunderx: not in enabled drivers build config
00:02:25.043 net/txgbe: not in enabled drivers build config
00:02:25.043 net/vdev_netvsc: not in enabled drivers build config
00:02:25.043 net/vhost: not in enabled drivers build config
00:02:25.043 net/virtio: not in enabled drivers build config
00:02:25.043 net/vmxnet3: not in enabled drivers build config
00:02:25.043 raw/cnxk_bphy: not in enabled drivers build config
00:02:25.043 raw/cnxk_gpio: not in enabled drivers build config
00:02:25.043 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:25.043 raw/ifpga: not in enabled drivers build config
00:02:25.043 raw/ntb: not in enabled drivers build config
00:02:25.043 raw/skeleton: not in enabled drivers build config
00:02:25.043 crypto/armv8: not in enabled drivers build config
00:02:25.043 crypto/bcmfs: not in enabled drivers build config
00:02:25.043 crypto/caam_jr: not in enabled drivers build config
00:02:25.043 crypto/ccp: not in enabled drivers build config
00:02:25.043 crypto/cnxk: not in enabled drivers build config
00:02:25.043 crypto/dpaa_sec: not in enabled drivers build config
00:02:25.043 crypto/dpaa2_sec: not in enabled drivers build config
00:02:25.043 crypto/ipsec_mb: not in enabled drivers build config
00:02:25.043 crypto/mlx5: not in enabled drivers build config
00:02:25.043 crypto/mvsam: not in enabled drivers build config
00:02:25.043 crypto/nitrox: not in enabled drivers build config
00:02:25.043 crypto/null: not in enabled drivers build config
00:02:25.043 crypto/octeontx: not in enabled drivers build config
00:02:25.043 crypto/openssl: not in enabled drivers build config
00:02:25.043 crypto/scheduler: not in enabled drivers build config
00:02:25.043 crypto/uadk: not in enabled drivers build config
00:02:25.043 crypto/virtio: not in enabled drivers build config
00:02:25.043 compress/isal: not in enabled drivers build config
00:02:25.043 compress/mlx5: not in enabled drivers build config
00:02:25.043 compress/octeontx: not in enabled drivers build config
00:02:25.043 compress/zlib: not in enabled drivers build config
00:02:25.043 regex/mlx5: not in enabled drivers build config
00:02:25.043 regex/cn9k: not in enabled drivers build config
00:02:25.043 vdpa/ifc: not in enabled drivers build config
00:02:25.043 vdpa/mlx5: not in enabled drivers build config
00:02:25.043 vdpa/sfc: not in enabled drivers build config
00:02:25.043 event/cnxk: not in enabled drivers build config
00:02:25.043 event/dlb2: not in enabled drivers build config
00:02:25.043 event/dpaa: not in enabled drivers build config
00:02:25.043 event/dpaa2: not in enabled drivers build config
00:02:25.043 event/dsw: not in enabled drivers build config
00:02:25.043 event/opdl: not in enabled drivers build config
00:02:25.043 event/skeleton: not in enabled drivers build config
00:02:25.043 event/sw: not in enabled drivers build config
00:02:25.043 event/octeontx: not in enabled drivers build config
00:02:25.043 baseband/acc: not in enabled drivers build config
00:02:25.043 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:25.043 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:25.043 baseband/la12xx: not in enabled drivers build config
00:02:25.043 baseband/null: not in enabled drivers build config
00:02:25.043 baseband/turbo_sw: not in enabled drivers build config
00:02:25.043 gpu/cuda: not in enabled drivers build config
00:02:25.043
00:02:25.043
00:02:25.043 Build targets in project: 309
00:02:25.043
00:02:25.043 DPDK 22.11.4
00:02:25.043
00:02:25.043 User defined options
00:02:25.043 libdir : lib
00:02:25.043 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:25.043 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:25.043 c_link_args :
00:02:25.043 enable_docs : false
00:02:25.043 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:25.043 enable_kmods : false
00:02:25.043 machine : native
00:02:25.043 tests : false
00:02:25.043
00:02:25.043 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:25.043 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
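For reference, the user-defined options recorded above (together with the deprecation warning about invoking `meson [options]` rather than `meson setup [options]`) correspond roughly to the non-deprecated invocation sketched below. This is an illustrative reconstruction from the logged options only: the build-directory argument is taken from the ninja command that follows, while the exact option spellings assume the standard meson built-ins and DPDK 22.11 meson options (enable_drivers, enable_kmods, enable_docs, machine, tests) and were not emitted verbatim by this log.

  meson setup /home/vagrant/spdk_repo/dpdk/build-tmp \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false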
00:02:25.043 04:51:54 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:25.043 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:25.043 [1/738] Generating lib/rte_kvargs_def with a custom command
00:02:25.043 [2/738] Generating lib/rte_kvargs_mingw with a custom command
00:02:25.043 [3/738] Generating lib/rte_telemetry_def with a custom command
00:02:25.043 [4/738] Generating lib/rte_telemetry_mingw with a custom command
00:02:25.043 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:25.043 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:25.043 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:25.043 [8/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:25.043 [9/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:25.043 [10/738] Linking static target lib/librte_kvargs.a
00:02:25.043 [11/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:25.043 [12/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:25.043 [13/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:25.043 [14/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:25.305 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:25.305 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:25.305 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:25.305 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:25.305 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:25.305 [20/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.305 [21/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:25.305 [22/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:25.305 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:02:25.305 [24/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:25.305 [25/738] Linking target lib/librte_kvargs.so.23.0
00:02:25.566 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:25.566 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:25.566 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:25.566 [29/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:25.566 [30/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:25.566 [31/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:25.566 [32/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:25.566 [33/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:25.566 [34/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:25.566 [35/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:25.566 [36/738] Linking static target lib/librte_telemetry.a
00:02:25.566 [37/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:25.566 [38/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:25.566 [39/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:25.827 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:25.827 [41/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:02:25.827 [42/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:25.827 [43/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:25.827 [44/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:25.827 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:25.827 [46/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.827 [47/738] Linking target lib/librte_telemetry.so.23.0
00:02:26.088 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:26.088 [49/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:26.088 [50/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:26.088 [51/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:26.088 [52/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:26.088 [53/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:26.088 [54/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:02:26.088 [55/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:26.088 [56/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:26.088 [57/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:26.088 [58/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:26.088 [59/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:26.088 [60/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:26.088 [61/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:26.088 [62/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:26.088 [63/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:26.088 [64/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:26.088 [65/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:26.088 [66/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:02:26.345 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:26.345 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:26.345 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:26.345 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:26.345 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:26.345 [72/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:26.345 [73/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:26.345 [74/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:26.345 [75/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:26.345 [76/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:26.345 [77/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:26.345 [78/738] Generating lib/rte_eal_def with a custom command
00:02:26.345 [79/738] Generating lib/rte_eal_mingw with a custom command
00:02:26.345 [80/738] Generating lib/rte_ring_mingw with a custom command
00:02:26.346 [81/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:26.346 [82/738] Generating lib/rte_ring_def with a custom command
00:02:26.346 [83/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:26.346 [84/738] Generating lib/rte_rcu_def with a custom command
00:02:26.346 [85/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:26.346 [86/738] Generating lib/rte_rcu_mingw with a custom command
00:02:26.603 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:26.603 [88/738] Linking static target lib/librte_ring.a
00:02:26.603 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:26.603 [90/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:26.603 [91/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:26.603 [92/738] Generating lib/rte_mempool_def with a custom command
00:02:26.603 [93/738] Generating lib/rte_mempool_mingw with a custom command
00:02:26.862 [94/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.862 [95/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:26.862 [96/738] Generating lib/rte_mbuf_def with a custom command
00:02:26.862 [97/738] Generating lib/rte_mbuf_mingw with a custom command
00:02:26.862 [98/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:26.862 [99/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:26.862 [100/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:26.862 [101/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:26.862 [102/738] Linking static target lib/librte_eal.a
00:02:27.120 [103/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:27.120 [104/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:27.120 [105/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:27.120 [106/738] Generating lib/rte_net_def with a custom command
00:02:27.120 [107/738] Generating lib/rte_net_mingw with a custom command
00:02:27.120 [108/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:27.120 [109/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:27.120 [110/738] Generating lib/rte_meter_def with a custom command
00:02:27.120 [111/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:27.120 [112/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:27.120 [113/738] Linking static target lib/librte_rcu.a
00:02:27.120 [114/738] Linking static target lib/librte_mempool.a
00:02:27.120 [115/738] Generating lib/rte_meter_mingw with a custom command
00:02:27.120 [116/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:27.120 [117/738] Linking static target lib/librte_meter.a
00:02:27.378 [118/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:27.378 [119/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:27.378 [120/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:27.378 [121/738] Linking static target lib/librte_net.a
00:02:27.378 [122/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.378 [123/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.636 [124/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.636 [125/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:27.636 [126/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:27.636 [127/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:27.636 [128/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:27.636 [129/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:27.636 [130/738] Linking static target lib/librte_mbuf.a
00:02:27.636 [131/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.895 [132/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:27.895 [133/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:27.895 [134/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:27.895 [135/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:28.153 [136/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.153 [137/738] Generating lib/rte_ethdev_def with a custom command
00:02:28.153 [138/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:28.153 [139/738] Generating lib/rte_ethdev_mingw with a custom command
00:02:28.153 [140/738] Generating lib/rte_pci_def with a custom command
00:02:28.153 [141/738] Generating lib/rte_pci_mingw with a custom command
00:02:28.153 [142/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:28.153 [143/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:28.153 [144/738] Linking static target lib/librte_pci.a
00:02:28.153 [145/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:28.153 [146/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:28.153 [147/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:28.153 [148/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:28.153 [149/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.153 [150/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:28.412 [151/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:28.412 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:28.412 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:28.412 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:28.412 [155/738] Generating lib/rte_cmdline_def with a custom command
00:02:28.412 [156/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:28.412 [157/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:28.412 [158/738] Generating lib/rte_cmdline_mingw with a custom command
00:02:28.412 [159/738] Generating lib/rte_metrics_def with a custom command
00:02:28.412 [160/738] Generating lib/rte_metrics_mingw with a custom command
00:02:28.412 [161/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:28.412 [162/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:28.412 [163/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:28.412 [164/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:28.412 [165/738] Generating lib/rte_hash_def with a custom command
00:02:28.412 [166/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:28.412 [167/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:28.412 [168/738] Generating lib/rte_hash_mingw with a custom command
00:02:28.412 [169/738] Generating lib/rte_timer_def with a custom command
00:02:28.412 [170/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:28.412 [171/738] Generating lib/rte_timer_mingw with a custom command
00:02:28.412 [172/738] Linking static target lib/librte_cmdline.a
00:02:28.670 [173/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:28.670 [174/738] Linking static target lib/librte_metrics.a
00:02:28.670 [175/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:28.670 [176/738] Linking static target lib/librte_timer.a
00:02:28.929 [177/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.929 [178/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:28.929 [179/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.929 [180/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:29.186 [181/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:29.186 [182/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.186 [183/738] Generating lib/rte_acl_def with a custom command
00:02:29.186 [184/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:29.186 [185/738] Generating lib/rte_acl_mingw with a custom command
00:02:29.186 [186/738] Generating lib/rte_bbdev_def with a custom command
00:02:29.186 [187/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:29.186 [188/738] Generating lib/rte_bbdev_mingw with a custom command
00:02:29.186 [189/738] Generating lib/rte_bitratestats_def with a custom command
00:02:29.186 [190/738] Generating lib/rte_bitratestats_mingw with a custom command
00:02:29.445 [191/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:29.445 [192/738] Linking static target lib/librte_ethdev.a
00:02:29.703 [193/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:29.703 [194/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:29.703 [195/738] Linking static target lib/librte_bitratestats.a
00:02:29.703 [196/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:29.703 [197/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:29.703 [198/738] Linking static target lib/librte_bbdev.a
00:02:29.703 [199/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.961 [200/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:29.961 [201/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:30.219 [202/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.219 [203/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:30.219 [204/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:30.219 [205/738] Linking static target lib/librte_hash.a
00:02:30.219 [206/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:30.219 [207/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:30.477 [208/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:30.477 [209/738] Generating lib/rte_bpf_def with a custom command
00:02:30.477 [210/738] Generating lib/rte_bpf_mingw with a custom command
00:02:30.735 [211/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:30.735 [212/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.735 [213/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o
00:02:30.735 [214/738] Generating lib/rte_cfgfile_def with a custom command
00:02:30.735 [215/738] Generating lib/rte_cfgfile_mingw with a custom command
00:02:30.735 [216/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:30.735 [217/738] Generating lib/rte_compressdev_def with a custom command
00:02:30.735 [218/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:30.735 [219/738] Generating lib/rte_compressdev_mingw with a custom command
00:02:30.735 [220/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:30.735 [221/738] Linking static target lib/librte_cfgfile.a
00:02:30.993 [222/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:02:30.993 [223/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:30.993 [224/738] Linking static target lib/librte_acl.a
00:02:30.993 [225/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:30.993 [226/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.993 [227/738] Generating lib/rte_cryptodev_def with a custom command
00:02:30.993 [228/738] Generating lib/rte_cryptodev_mingw with a custom command
00:02:30.993 [229/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:30.993 [230/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:30.993 [231/738] Linking static target lib/librte_bpf.a
00:02:30.993 [232/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:30.993 [233/738] Linking static target lib/librte_compressdev.a
00:02:31.251 [234/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.251 [235/738] Generating lib/rte_distributor_def with a custom command
00:02:31.251 [236/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:31.251 [237/738] Generating lib/rte_distributor_mingw with a custom command
00:02:31.251 [238/738] Generating lib/rte_efd_def with a custom command
00:02:31.251 [239/738] Generating lib/rte_efd_mingw with a custom command
00:02:31.251 [240/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.508 [241/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:31.508 [242/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:31.508 [243/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:31.508 [244/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:31.508 [245/738] Linking static target lib/librte_distributor.a
00:02:31.766 [246/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:31.766 [247/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.766 [248/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.766 [249/738] Linking target lib/librte_eal.so.23.0
00:02:31.766 [250/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.766 [251/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols
00:02:31.766 [252/738] Linking target lib/librte_ring.so.23.0
00:02:32.024 [253/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols
00:02:32.024 [254/738] Linking target lib/librte_rcu.so.23.0
00:02:32.024 [255/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:32.024 [256/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:32.024 [257/738] Linking target lib/librte_mempool.so.23.0
00:02:32.024 [258/738] Linking target lib/librte_meter.so.23.0
00:02:32.024 [259/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols
00:02:32.024 [260/738] Linking target lib/librte_pci.so.23.0
00:02:32.024 [261/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols
00:02:32.024 [262/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols
00:02:32.024 [263/738] Linking target lib/librte_mbuf.so.23.0
00:02:32.282 [264/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols
00:02:32.282 [265/738] Linking target lib/librte_timer.so.23.0
00:02:32.282 [266/738] Linking target lib/librte_acl.so.23.0
00:02:32.282 [267/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols
00:02:32.282 [268/738] Linking target lib/librte_net.so.23.0
00:02:32.282 [269/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols
00:02:32.282 [270/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols
00:02:32.282 [271/738] Linking target lib/librte_bbdev.so.23.0
00:02:32.282 [272/738] Linking target lib/librte_cfgfile.so.23.0
00:02:32.282 [273/738] Linking target lib/librte_compressdev.so.23.0
00:02:32.282 [274/738] Linking static target lib/librte_efd.a
00:02:32.282 [275/738] Linking target lib/librte_distributor.so.23.0
00:02:32.282 [276/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:32.282 [277/738] Generating lib/rte_eventdev_def with a custom command
00:02:32.282 [278/738] Linking static target lib/librte_cryptodev.a
00:02:32.282 [279/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols
00:02:32.282 [280/738] Generating lib/rte_eventdev_mingw with a custom command
00:02:32.282 [281/738] Linking target lib/librte_cmdline.so.23.0
00:02:32.540 [282/738] Linking target lib/librte_hash.so.23.0
00:02:32.540 [283/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:32.540 [284/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:32.540 [285/738] Generating lib/rte_gpudev_def with a custom command
00:02:32.540 [286/738] Generating lib/rte_gpudev_mingw with a custom command
00:02:32.540 [287/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.540 [288/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:02:32.540 [289/738] Linking target lib/librte_efd.so.23.0
00:02:32.540 [290/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:32.798 [291/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:32.798 [292/738] Generating lib/rte_gro_def with a custom command
00:02:32.798 [293/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.798 [294/738] Generating lib/rte_gro_mingw with a custom command
00:02:32.799 [295/738] Linking target lib/librte_ethdev.so.23.0
00:02:32.799 [296/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:33.057 [297/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:02:33.057 [298/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:33.057 [299/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:33.057 [300/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:33.057 [301/738] Linking target lib/librte_metrics.so.23.0
00:02:33.057 [302/738] Linking static target lib/librte_gpudev.a
00:02:33.057 [303/738] Linking target lib/librte_bpf.so.23.0
00:02:33.057 [304/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:33.057 [305/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:02:33.057 [306/738] Linking target lib/librte_bitratestats.so.23.0
00:02:33.057 [307/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:02:33.057 [308/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:33.057 [309/738] Linking static target lib/librte_gro.a
00:02:33.317 [310/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:33.317 [311/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:33.317 [312/738] Generating lib/rte_gso_def with a custom command
00:02:33.317 [313/738] Generating lib/rte_gso_mingw with a custom command
00:02:33.317 [314/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:33.317 [315/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.317 [316/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:33.317 [317/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:33.317 [318/738] Linking target lib/librte_gro.so.23.0
00:02:33.319 [319/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:33.575 [320/738] Linking static target lib/librte_gso.a
00:02:33.575 [321/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.575 [322/738] Linking target lib/librte_gpudev.so.23.0
00:02:33.575 [323/738] Generating lib/rte_ip_frag_def with a custom command
00:02:33.575 [324/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.575 [325/738] Generating lib/rte_ip_frag_mingw with a custom command
00:02:33.575 [326/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:33.575 [327/738] Linking target lib/librte_gso.so.23.0
00:02:33.575 [328/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:33.575 [329/738] Generating lib/rte_jobstats_def with a custom command
00:02:33.575 [330/738] Linking static target lib/librte_eventdev.a
00:02:33.575 [331/738] Generating lib/rte_jobstats_mingw with a custom command
00:02:33.575 [332/738] Generating lib/rte_latencystats_def with a custom command
00:02:33.575 [333/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:33.575 [334/738] Linking static target lib/librte_jobstats.a
00:02:33.833 [335/738] Generating lib/rte_latencystats_mingw with a custom command
00:02:33.833 [336/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:33.833 [337/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:33.833 [338/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:33.833 [339/738] Generating lib/rte_lpm_def with a custom command
00:02:33.833 [340/738] Generating lib/rte_lpm_mingw with a custom command
00:02:33.833 [341/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:33.833 [342/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:33.833 [343/738] Linking static target lib/librte_ip_frag.a
00:02:33.833 [344/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.833 [345/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.833 [346/738] Linking target lib/librte_jobstats.so.23.0
00:02:33.833 [347/738] Linking target lib/librte_cryptodev.so.23.0
00:02:34.092 [348/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:34.092 [349/738] Linking static target lib/librte_latencystats.a
00:02:34.092 [350/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:02:34.092 [351/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.092 [352/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:34.092 [353/738] Generating lib/rte_member_def with a custom command
00:02:34.092 [354/738] Linking target lib/librte_ip_frag.so.23.0
00:02:34.092 [355/738] Generating lib/rte_member_mingw with a custom command
00:02:34.092 [356/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.092 [357/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:02:34.092 [358/738] Linking target lib/librte_latencystats.so.23.0
00:02:34.092 [359/738] Generating lib/rte_pcapng_def with a custom command
00:02:34.092 [360/738] Generating lib/rte_pcapng_mingw with a custom command
00:02:34.092 [361/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:34.350 [362/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:34.350 [363/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:34.350 [364/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:34.350 [365/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o
00:02:34.350 [366/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:34.350 [367/738] Linking static target lib/librte_lpm.a
00:02:34.608 [368/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:34.608 [369/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:34.608 [370/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:02:34.608 [371/738] Linking static target lib/librte_pcapng.a
00:02:34.608 [372/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:34.608 [373/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:34.608 [374/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:34.608 [375/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.608 [376/738] Generating lib/rte_power_def with a custom command
00:02:34.608 [377/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:34.608 [378/738] Generating lib/rte_power_mingw with a custom command
00:02:34.608 [379/738] Generating lib/rte_rawdev_def with a custom command
00:02:34.608 [380/738] Generating lib/rte_rawdev_mingw with a custom command
00:02:34.608 [381/738] Linking target lib/librte_lpm.so.23.0
00:02:34.608 [382/738] Generating lib/rte_regexdev_def with a custom command
00:02:34.608 [383/738] Generating lib/rte_regexdev_mingw with a custom command
00:02:34.866 [384/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:34.866 [385/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:02:34.866 [386/738] Generating lib/rte_dmadev_def with a custom command
00:02:34.866 [387/738] Generating lib/rte_dmadev_mingw with a custom command
00:02:34.866 [388/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.866 [389/738] Linking target lib/librte_pcapng.so.23.0
00:02:34.866 [390/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.866 [391/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:02:34.866 [392/738] Linking target lib/librte_eventdev.so.23.0
00:02:34.866 [393/738] Generating lib/rte_rib_def with a custom command
00:02:34.866 [394/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:02:34.866 [395/738] Generating lib/rte_rib_mingw with a custom command
00:02:34.866 [396/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:34.866 [397/738] Linking static target lib/librte_rawdev.a
00:02:34.866 [398/738] Generating lib/rte_reorder_def with a custom command
00:02:35.124 [399/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:35.124 [400/738] Linking static target lib/librte_power.a
00:02:35.124 [401/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:02:35.124 [402/738] Generating lib/rte_reorder_mingw with a custom command
00:02:35.124 [403/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:35.124 [404/738] Linking static target lib/librte_dmadev.a
00:02:35.124 [405/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:35.124 [406/738] Linking static target lib/librte_regexdev.a
00:02:35.124 [407/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:35.124 [408/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:35.124 [409/738] Linking static target lib/librte_member.a
00:02:35.124 [410/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:35.124 [411/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:35.381 [412/738] Generating lib/rte_sched_def with a custom command
00:02:35.381 [413/738] Generating lib/rte_sched_mingw with a custom command
00:02:35.381 [414/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.381 [415/738] Linking target lib/librte_rawdev.so.23.0
00:02:35.381 [416/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:35.381 [417/738] Generating lib/rte_security_def with a custom command
00:02:35.381 [418/738] Generating lib/rte_security_mingw with a custom command
00:02:35.381 [419/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:35.381 [420/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.381 [421/738] Linking static target lib/librte_reorder.a
00:02:35.381 [422/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:35.381 [423/738] Linking target lib/librte_dmadev.so.23.0
00:02:35.381 [424/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.381 [425/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:35.381 [426/738] Linking target lib/librte_member.so.23.0
00:02:35.381 [427/738] Generating lib/rte_stack_def with a custom command
00:02:35.381 [428/738] Generating lib/rte_stack_mingw with a custom command
00:02:35.381 [429/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:35.381 [430/738] Linking static target lib/librte_stack.a
00:02:35.381 [431/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:02:35.639 [432/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.639 [433/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.639 [434/738] Linking target lib/librte_regexdev.so.23.0
00:02:35.639 [435/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:35.639 [436/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:35.639 [437/738] Linking target lib/librte_reorder.so.23.0
00:02:35.639 [438/738] Linking static target lib/librte_rib.a
00:02:35.639 [439/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.639 [440/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.639 [441/738] Linking target lib/librte_power.so.23.0
00:02:35.639 [442/738] Linking target lib/librte_stack.so.23.0
00:02:35.901 [443/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:35.901 [444/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:35.901 [445/738] Linking static target lib/librte_security.a
00:02:35.901 [446/738] Generating lib/rte_vhost_def with a custom command
00:02:35.901 [447/738] Generating lib/rte_vhost_mingw with a custom command
00:02:35.901 [448/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.901 [449/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:35.901 [450/738] Linking target lib/librte_rib.so.23.0
00:02:35.901 [451/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:02:36.159 [452/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:36.159 [453/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.159 [454/738] Linking target lib/librte_security.so.23.0
00:02:36.159 [455/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:36.159 [456/738] Linking static target lib/librte_sched.a
00:02:36.425 [457/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:02:36.425 [458/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:36.425 [459/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:36.425 [460/738] Generating lib/rte_ipsec_def with a custom command
00:02:36.425 [461/738] Generating lib/rte_ipsec_mingw with a custom command
00:02:36.425 [462/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:36.746 [463/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.746 [464/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:36.746 [465/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:36.746 [466/738] Linking target lib/librte_sched.so.23.0
00:02:36.746 [467/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:02:36.746 [468/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:36.746 [469/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:36.746 [470/738] Generating lib/rte_fib_def with a custom command
00:02:36.746 [471/738] Generating lib/rte_fib_mingw with a custom command
00:02:36.746 [472/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:37.012 [473/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:02:37.012 [474/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:37.012 [475/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:02:37.270 [476/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:37.270 [477/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:37.270 [478/738] Linking static target lib/librte_fib.a
00:02:37.270 [479/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:37.270 [480/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:37.270 [481/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:37.270 [482/738] Linking static target lib/librte_ipsec.a
00:02:37.528 [483/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:37.528 [484/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:37.528 [485/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:37.528 [486/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.528 [487/738] Linking target lib/librte_fib.so.23.0
00:02:37.528 [488/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.528 [489/738] Linking target lib/librte_ipsec.so.23.0
00:02:37.786 [490/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:37.786 [491/738] Generating lib/rte_port_def with a custom command
00:02:37.786 [492/738] Generating lib/rte_port_mingw with a custom command
00:02:37.787 [493/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:37.787 [494/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:37.787 [495/738] Generating lib/rte_pdump_def with a custom command
00:02:37.787 [496/738] Generating lib/rte_pdump_mingw with a custom command
00:02:38.045 [497/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:38.045 [498/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:38.045 [499/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:38.045 [500/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:38.045 [501/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:38.303 [502/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:38.303 [503/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:38.303 [504/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:38.303 [505/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:38.303 [506/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:38.572 [507/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:38.572 [508/738] Linking static target lib/librte_port.a 00:02:38.572 [509/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:38.572 [510/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:38.572 [511/738] Linking static target lib/librte_pdump.a 00:02:38.572 [512/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:38.830 [513/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.830 [514/738] Linking target lib/librte_pdump.so.23.0 00:02:38.830 [515/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:38.830 [516/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:38.830 [517/738] Generating lib/rte_table_def with a custom command 00:02:38.830 [518/738] Generating lib/rte_table_mingw with a custom command 00:02:38.830 [519/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.830 [520/738] Linking target lib/librte_port.so.23.0 00:02:38.830 [521/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:39.088 [522/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:39.088 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:39.088 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:39.088 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:39.088 [526/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:39.088 [527/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:39.088 [528/738] Linking static target lib/librte_table.a 00:02:39.088 [529/738] Generating lib/rte_pipeline_def with a custom command 00:02:39.088 [530/738] Generating lib/rte_pipeline_mingw with a custom command 00:02:39.346 [531/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:39.346 [532/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:39.604 [533/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.605 [534/738] Linking target lib/librte_table.so.23.0 00:02:39.605 [535/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:39.605 [536/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:39.605 [537/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:39.862 [538/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:39.862 [539/738] Generating lib/rte_graph_def with a custom command 00:02:39.862 [540/738] Generating lib/rte_graph_mingw with a custom command 00:02:39.862 [541/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:39.862 [542/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:39.862 [543/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:40.120 [544/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:40.120 [545/738] Compiling C object 
00:02:40.120 [546/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:40.120 [547/738] Linking static target lib/librte_graph.a
00:02:40.120 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:40.120 [549/738] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:40.378 [550/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:40.378 [551/738] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:40.378 [552/738] Generating lib/rte_node_def with a custom command
00:02:40.378 [553/738] Generating lib/rte_node_mingw with a custom command
00:02:40.378 [554/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:40.378 [555/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:40.636 [556/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:40.636 [557/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:40.636 [558/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:40.636 [559/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.636 [560/738] Linking target lib/librte_graph.so.23.0
00:02:40.636 [561/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:40.636 [562/738] Generating drivers/rte_bus_pci_def with a custom command
00:02:40.636 [563/738] Generating drivers/rte_bus_pci_mingw with a custom command
00:02:40.636 [564/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:40.893 [565/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:02:40.893 [566/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:40.893 [567/738] Generating drivers/rte_bus_vdev_mingw with a custom command
00:02:40.893 [568/738] Generating drivers/rte_bus_vdev_def with a custom command
00:02:40.893 [569/738] Generating drivers/rte_mempool_ring_def with a custom command
00:02:40.893 [570/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:40.893 [571/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:40.893 [572/738] Generating drivers/rte_mempool_ring_mingw with a custom command
00:02:40.893 [573/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:40.893 [574/738] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:40.893 [575/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:40.893 [576/738] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:41.151 [577/738] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:41.151 [578/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:41.151 [579/738] Linking static target drivers/librte_bus_pci.a
00:02:41.151 [580/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:41.151 [581/738] Linking static target lib/librte_node.a
00:02:41.151 [582/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:41.151 [583/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:41.151 [584/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:41.151 [585/738] Linking static target drivers/librte_bus_vdev.a
00:02:41.151 [586/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
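The "Generating drivers/rte_bus_pci.pmd.c with a custom command" step is the PMD-info dance every DPDK driver goes through: the driver is first archived as libtmp_rte_bus_pci.a, a small C file embedding the driver's registration metadata is generated from that archive, and the generated stub is compiled and linked into the final librte_bus_pci library. Conceptually it is close to the following, though the pmdinfogen invocation and link flags here are assumptions, not verbatim (the real build drives this through meson custom targets):

    # Conceptual sketch; tool arguments are assumed, not copied from the build.
    python3 buildtools/pmdinfogen.py libtmp_rte_bus_pci.a rte_bus_pci.pmd.c  # extract PMD metadata
    cc -c rte_bus_pci.pmd.c -o rte_bus_pci.pmd.o                             # compile the stub
    cc -shared -o librte_bus_pci.so.23.0 rte_bus_pci.pmd.o \
       -Wl,--whole-archive libtmp_rte_bus_pci.a -Wl,--no-whole-archive       # final driver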
00:02:41.407 [587/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:41.407 [588/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:41.408 [589/738] Linking target lib/librte_node.so.23.0
00:02:41.408 [590/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:41.408 [591/738] Linking target drivers/librte_bus_vdev.so.23.0
00:02:41.408 [592/738] Linking target drivers/librte_bus_pci.so.23.0
00:02:41.408 [593/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:02:41.408 [594/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:02:41.408 [595/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:41.665 [596/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:41.665 [597/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:41.665 [598/738] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:41.665 [599/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:41.665 [600/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:41.665 [601/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:41.665 [602/738] Linking static target drivers/librte_mempool_ring.a
00:02:41.665 [603/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:41.665 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:41.665 [605/738] Linking target drivers/librte_mempool_ring.so.23.0
00:02:41.923 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:42.180 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:42.438 [608/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:42.438 [609/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:42.438 [610/738] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:42.696 [611/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:42.954 [612/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:42.954 [613/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:42.954 [614/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:43.212 [615/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:43.212 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:43.212 [617/738] Generating drivers/rte_net_i40e_def with a custom command
00:02:43.212 [618/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:43.212 [619/738] Generating drivers/rte_net_i40e_mingw with a custom command
00:02:43.470 [620/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:43.728 [621/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:02:43.986 [622/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:02:43.986 [623/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:43.986 [624/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
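The pci_uio and pci_vfio objects compiled above are the runtime glue that lets the PCI bus driver claim a NIC from userspace. On a Linux host like this VM, putting the freshly built i40e PMD on real hardware would typically mean reserving hugepages and binding the device to vfio-pci; the snippet below is illustrative host preparation, and the PCI address is a placeholder:

    # Typical host prep, shown for illustration only; 0000:3b:00.0 is a placeholder.
    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    modprobe vfio-pci
    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:3b:00.0
    ./usertools/dpdk-devbind.py --status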
00:02:43.986 [625/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:02:44.244 [626/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:02:44.244 [627/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:44.244 [628/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:44.244 [629/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o
00:02:44.503 [630/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:02:44.503 [631/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:02:44.503 [632/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:44.761 [633/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:44.761 [634/738] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:44.761 [635/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:02:44.761 [636/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:02:44.761 [637/738] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:44.761 [638/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:45.018 [639/738] Linking static target drivers/librte_net_i40e.a
00:02:45.018 [640/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:45.018 [641/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:45.018 [642/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:02:45.018 [643/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:02:45.276 [644/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:45.276 [645/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:45.276 [646/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:02:45.276 [647/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.534 [648/738] Linking target drivers/librte_net_i40e.so.23.0
00:02:45.534 [649/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:02:45.534 [650/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:02:45.793 [651/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:45.793 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:02:45.793 [653/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:45.793 [654/738] Linking static target lib/librte_vhost.a
00:02:45.793 [655/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:02:45.793 [656/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:02:45.793 [657/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:02:46.055 [658/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:02:46.055 [659/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:02:46.055 [660/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:02:46.055 [661/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:02:46.055 [662/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:02:46.055 [663/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:02:46.313 [664/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:02:46.313 [665/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:02:46.572 [666/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:02:46.572 [667/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.572 [668/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:02:46.572 [669/738] Linking target lib/librte_vhost.so.23.0
00:02:46.831 [670/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:02:47.089 [671/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:02:47.089 [672/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:02:47.089 [673/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:02:47.089 [674/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:02:47.089 [675/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:02:47.347 [676/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:02:47.347 [677/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:02:47.347 [678/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:02:47.347 [679/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:02:47.347 [680/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:02:47.347 [681/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:02:47.347 [682/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:02:47.605 [683/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:02:47.605 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:02:47.605 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:02:47.864 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:02:47.864 [687/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:02:47.864 [688/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:02:47.864 [689/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:02:48.123 [690/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:02:48.123 [691/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:02:48.123 [692/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:48.123 [693/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:02:48.382 [694/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:48.382 [695/738] Linking static target lib/librte_pipeline.a
00:02:48.382 [696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
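The test-pmd_*.c.o objects accumulating here become app/dpdk-testpmd, which is linked a few records below. Once a port is bound (see the vfio-pci sketch earlier), a minimal interactive run looks like the following; the core list and memory-channel count are illustrative and host-dependent:

    # Illustrative invocation of the app built in this log.
    ./build-tmp/app/dpdk-testpmd -l 0-1 -n 4 -- -i
    testpmd> start
    testpmd> show port stats all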
00:02:48.382 [697/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:02:48.640 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:02:48.640 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:02:48.640 [700/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:02:48.640 [701/738] Linking target app/dpdk-pdump
00:02:48.640 [702/738] Linking target app/dpdk-dumpcap
00:02:48.900 [703/738] Linking target app/dpdk-proc-info
00:02:48.900 [704/738] Linking target app/dpdk-test-acl
00:02:48.900 [705/738] Linking target app/dpdk-test-bbdev
00:02:48.900 [706/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:49.158 [707/738] Linking target app/dpdk-test-cmdline
00:02:49.158 [708/738] Linking target app/dpdk-test-compress-perf
00:02:49.158 [709/738] Linking target app/dpdk-test-crypto-perf
00:02:49.158 [710/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:02:49.158 [711/738] Linking target app/dpdk-test-fib
00:02:49.158 [712/738] Linking target app/dpdk-test-eventdev
00:02:49.158 [713/738] Linking target app/dpdk-test-flow-perf
00:02:49.416 [714/738] Linking target app/dpdk-test-pipeline
00:02:49.416 [715/738] Linking target app/dpdk-test-gpudev
00:02:49.416 [716/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:02:49.674 [717/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:49.674 [718/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:49.674 [719/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:49.674 [720/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:49.932 [721/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:49.932 [722/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:49.932 [723/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:50.191 [724/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.191 [725/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:50.191 [726/738] Linking target lib/librte_pipeline.so.23.0
00:02:50.191 [727/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:50.192 [728/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:50.450 [729/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:50.450 [730/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:50.450 [731/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:50.450 [732/738] Linking target app/dpdk-test-sad
00:02:50.450 [733/738] Linking target app/dpdk-test-regex
00:02:50.450 [734/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:50.707 [735/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:50.966 [736/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:51.225 [737/738] Linking target app/dpdk-testpmd
00:02:51.225 [738/738] Linking target app/dpdk-test-security-perf
00:02:51.225 04:52:20 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s
00:02:51.225 04:52:20 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:51.225 04:52:20 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
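The three shell records that close the build are SPDK's autobuild script gating on the OS: uname -s is compared against FreeBSD (the test fails on this Linux VM, so the Linux path is taken) before the install step at autobuild_common.sh line 214 is launched. Reduced to plain shell, what the log shows is roughly the sketch below, not the verbatim script:

    # Rough shape of the logic visible in the log above.
    if [[ "$(uname -s)" != "FreeBSD" ]]; then
        ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
    fi

The install that follows copies, among other artifacts, the entire examples/ tree to build/share/dpdk/examples; the example Makefiles are shipped verbatim because they are written to build against an installed DPDK via pkg-config (libdpdk.pc) rather than against the source tree.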
00:02:51.225 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:51.225 [0/1] Installing files.
00:02:51.486 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.486 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.487 
Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:51.487 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:51.488 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.488 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.489 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:51.489 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.489 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:51.490 
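A quick sanity check, assuming a POSIX shell on the build host (not part of the logged run): listing the datadir confirms which example trees the install records above and below have populated.
find /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples -maxdepth 1 -type d | sort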
Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.490 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:51.490 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:51.491 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:51.491 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:51.491 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.491 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.491 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.491 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.491 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.491 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.491 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.491 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.491 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.491 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.491 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.491 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.491 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.491 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.491 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
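Each library is installed in two forms: a static archive (librte_*.a) and a shared object versioned .so.23.0, the DPDK 22.11 ABI. A sketch, assuming the usual unversioned development symlinks also exist beside them (they are not shown in this excerpt), of pointing a consumer at this non-system prefix:
export LD_LIBRARY_PATH=/home/vagrant/spdk_repo/dpdk/build/lib:$LD_LIBRARY_PATH
# dpdk-testpmd is installed into build/bin further down in this log
ldd /home/vagrant/spdk_repo/dpdk/build/bin/dpdk-testpmd | grep librte_ | head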
00:02:51.752 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.752 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:51.753 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:51.753 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:51.753 Installing drivers/librte_net_i40e.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.753 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:51.753 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.753 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 
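With the headers above in place, a consumer can compile directly against the install tree without a meson subproject. A minimal sketch, assuming gcc on the builder and the libdpdk .pc under build/lib/pkgconfig (the generated rte_config.h can imply machine-specific flags, which `pkg-config --cflags` carries along):
export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
cat > /tmp/eal_check.c <<'EOF'
#include <rte_eal.h>    /* installed into build/include above */
#include <rte_ring.h>
int main(void) { return 0; }
EOF
gcc -c /tmp/eal_check.c $(pkg-config --cflags libdpdk) -o /tmp/eal_check.o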
Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.754 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.755 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:51.756 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:51.756 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:51.756 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:51.756 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:51.756 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:51.756 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:51.756 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:51.756 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:51.756 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:51.756 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:51.756 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:51.756 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:51.756 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:51.756 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:51.756 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:51.756 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:51.756 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:51.756 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:51.756 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:51.756 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:51.756 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
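The long run of "Installing symlink pointing to ..." entries here is the standard ELF shared-library version chain: the real object librte_<name>.so.23.0 gets a soname link (librte_<name>.so.23) for the runtime loader and a bare librte_<name>.so link so that -lrte_<name> resolves at link time. A minimal sketch for verifying the chain in this build tree (paths taken from the log above; the inspection commands are illustrative and not part of the install itself):

    # Follow the version-symlink chain the install step just created;
    # any of the librte_* libraries behaves the same way.
    cd /home/vagrant/spdk_repo/dpdk/build/lib
    readlink librte_ethdev.so        # -> librte_ethdev.so.23
    readlink librte_ethdev.so.23     # -> librte_ethdev.so.23.0
    # The SONAME embedded in the real object is what the dynamic linker
    # records in consumers at link time:
    objdump -p librte_ethdev.so.23.0 | grep SONAME    # SONAME  librte_ethdev.so.23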
00:02:51.756 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:51.756 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:51.756 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:51.756 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:51.756 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:51.756 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:51.756 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:51.756 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:51.756 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:51.756 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:51.756 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:51.756 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:51.756 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:51.756 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:51.756 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:51.756 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:51.756 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:51.756 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:51.756 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:51.756 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:51.756 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:51.756 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:51.756 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:51.756 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:51.756 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:51.756 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:51.756 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:51.756 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:51.756 
Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:51.756 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:51.756 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:51.756 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:51.756 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:51.756 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:51.756 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:51.756 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:51.756 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:51.756 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:51.756 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:51.756 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:51.756 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:51.756 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:51.757 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:51.757 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:51.757 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:51.757 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:51.757 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:51.757 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:51.757 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:51.757 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:51.757 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:51.757 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:51.757 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:51.757 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:51.757 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:51.757 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:51.757 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:51.757 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:51.757 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:51.757 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:51.757 Installing symlink pointing to librte_power.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:51.757 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:51.757 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:51.757 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:51.757 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:51.757 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:51.757 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:51.757 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:51.757 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:51.757 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:51.757 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:51.757 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:51.757 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:51.757 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:51.757 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:51.757 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:51.757 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:51.757 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:51.757 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:51.757 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:51.757 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:51.757 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:51.757 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:51.757 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:51.757 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:51.757 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:51.757 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:51.757 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:51.757 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:51.757 
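The './librte_bus_pci.so' -> 'dpdk/pmds-23.0/...' moves logged a little earlier, together with the pmds-23.0 symlinks just below and the symlink-drivers-solibs.sh script that follows them, reflect DPDK's driver layout: PMD shared objects are collected into a versioned plugin directory that EAL can scan, while compatibility symlinks keep the ordinary lib directory linkable. A hedged sketch of working with that directory (ls is plain inspection; my_dpdk_app is a hypothetical consumer binary, not something this build produced, and -d is the stock EAL option for loading a driver explicitly):

    # Drivers collected under the versioned plugin directory:
    ls /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
    # librte_bus_pci.so.23.0  librte_bus_vdev.so.23.0
    # librte_mempool_ring.so.23.0  librte_net_i40e.so.23.0  (+ symlinks)
    # Any EAL-based application can load one of them explicitly:
    my_dpdk_app -d /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so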
Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:51.757 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:51.757 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:51.757 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:51.757 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:51.757 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:51.757 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:51.757 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:51.757 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:51.757 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:51.757 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:51.757 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:51.757 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:51.757 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:51.757 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:51.757 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:51.757 04:52:20 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:51.757 04:52:20 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:51.757 00:02:51.757 real 0m32.510s 00:02:51.757 user 3m39.436s 00:02:51.757 sys 0m33.579s 00:02:51.757 04:52:20 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:51.757 ************************************ 00:02:51.757 END TEST build_native_dpdk 00:02:51.757 ************************************ 00:02:51.757 04:52:20 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:51.757 04:52:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:51.757 04:52:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:51.757 04:52:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:51.757 04:52:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:51.757 04:52:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:51.757 04:52:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:51.757 04:52:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:51.757 04:52:20 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk 
--with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:02:52.016 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:52.016 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.016 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:52.016 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:52.274 Using 'verbs' RDMA provider 00:03:03.206 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:13.196 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:13.761 Creating mk/config.mk...done. 00:03:13.761 Creating mk/cc.flags.mk...done. 00:03:13.761 Type 'make' to build. 00:03:13.761 04:52:42 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:13.761 04:52:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:13.761 04:52:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:13.761 04:52:42 -- common/autotest_common.sh@10 -- $ set +x 00:03:13.761 ************************************ 00:03:13.761 START TEST make 00:03:13.761 ************************************ 00:03:13.761 04:52:42 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:14.019 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:14.019 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:14.019 meson setup builddir \ 00:03:14.019 -Dwith-libaio=enabled \ 00:03:14.019 -Dwith-liburing=enabled \ 00:03:14.019 -Dwith-libvfn=disabled \ 00:03:14.019 -Dwith-spdk=disabled \ 00:03:14.019 -Dexamples=false \ 00:03:14.019 -Dtests=false \ 00:03:14.019 -Dtools=false && \ 00:03:14.019 meson compile -C builddir && \ 00:03:14.019 cd -) 00:03:14.019 make[1]: Nothing to be done for 'all'. 
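"Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs" above is configure resolving --with-dpdk through the libdpdk.pc file installed a step earlier. When the reported paths look wrong, the same lookup can be replayed by hand; a sketch assuming a stock pkg-config on the box (the exact --cflags/--libs output depends on which DPDK components were enabled):

    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk   # should report the 22.11 stable tag checked out earlier
    pkg-config --cflags libdpdk       # -I/home/vagrant/spdk_repo/dpdk/build/include ...
    pkg-config --libs libdpdk         # -L/home/vagrant/spdk_repo/dpdk/build/lib -lrte_... per component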
00:03:15.934 The Meson build system 00:03:15.934 Version: 1.5.0 00:03:15.934 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:15.934 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:15.934 Build type: native build 00:03:15.934 Project name: xnvme 00:03:15.934 Project version: 0.7.5 00:03:15.934 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:15.934 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:15.934 Host machine cpu family: x86_64 00:03:15.934 Host machine cpu: x86_64 00:03:15.934 Message: host_machine.system: linux 00:03:15.934 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:15.934 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:15.934 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:15.934 Run-time dependency threads found: YES 00:03:15.934 Has header "setupapi.h" : NO 00:03:15.934 Has header "linux/blkzoned.h" : YES 00:03:15.934 Has header "linux/blkzoned.h" : YES (cached) 00:03:15.934 Has header "libaio.h" : YES 00:03:15.934 Library aio found: YES 00:03:15.934 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:15.934 Run-time dependency liburing found: YES 2.2 00:03:15.934 Dependency libvfn skipped: feature with-libvfn disabled 00:03:15.934 Found CMake: /usr/bin/cmake (3.27.7) 00:03:15.934 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:15.934 Subproject spdk : skipped: feature with-spdk disabled 00:03:15.934 Run-time dependency appleframeworks found: NO (tried framework) 00:03:15.934 Run-time dependency appleframeworks found: NO (tried framework) 00:03:15.934 Library rt found: YES 00:03:15.934 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:15.934 Configuring xnvme_config.h using configuration 00:03:15.934 Configuring xnvme.spec using configuration 00:03:15.934 Run-time dependency bash-completion found: YES 2.11 00:03:15.934 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:15.934 Program cp found: YES (/usr/bin/cp) 00:03:15.934 Build targets in project: 3 00:03:15.934 00:03:15.934 xnvme 0.7.5 00:03:15.934 00:03:15.934 Subprojects 00:03:15.934 spdk : NO Feature 'with-spdk' disabled 00:03:15.934 00:03:15.934 User defined options 00:03:15.934 examples : false 00:03:15.934 tests : false 00:03:15.934 tools : false 00:03:15.934 with-libaio : enabled 00:03:15.934 with-liburing: enabled 00:03:15.934 with-libvfn : disabled 00:03:15.934 with-spdk : disabled 00:03:15.934 00:03:15.934 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:16.203 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:16.203 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:16.203 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:16.203 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:16.203 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:16.203 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:16.203 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:16.203 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:16.203 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:16.203 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:16.203 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:16.203 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:16.203 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:16.203 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:16.203 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:16.203 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:16.203 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:16.461 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:16.461 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:16.461 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:16.461 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:16.461 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:16.461 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:16.461 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:16.461 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:16.461 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:16.461 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:16.461 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:16.461 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:16.461 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:16.461 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:16.461 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:16.461 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:16.461 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:16.461 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:16.461 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:16.461 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:16.461 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:16.461 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:16.461 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:16.461 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:16.461 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:16.461 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:16.461 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:16.461 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:16.461 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:16.461 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:16.461 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:16.461 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:16.461 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:16.461 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:16.461 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:16.461 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:16.461 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:16.720 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:16.720 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:16.720 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:16.720 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:16.720 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:16.720 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:16.720 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:16.720 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:16.720 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:16.720 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:16.720 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:16.720 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:16.720 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:16.720 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:16.720 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:16.720 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:16.720 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:16.720 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:16.720 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:16.979 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:17.237 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:17.237 [75/76] Linking static target lib/libxnvme.a 00:03:17.237 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:17.237 INFO: autodetecting backend as ninja 00:03:17.237 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:17.237 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:49.300 CC lib/ut_mock/mock.o 00:03:49.300 CC lib/log/log.o 00:03:49.300 CC lib/log/log_deprecated.o 00:03:49.300 CC lib/ut/ut.o 00:03:49.300 CC lib/log/log_flags.o 00:03:49.300 LIB libspdk_log.a 00:03:49.300 LIB libspdk_ut_mock.a 00:03:49.300 LIB libspdk_ut.a 00:03:49.300 SO libspdk_log.so.7.1 00:03:49.300 SO libspdk_ut_mock.so.6.0 00:03:49.300 SO libspdk_ut.so.2.0 00:03:49.300 SYMLINK libspdk_ut.so 00:03:49.300 SYMLINK libspdk_ut_mock.so 00:03:49.300 SYMLINK libspdk_log.so 00:03:49.300 CC lib/dma/dma.o 00:03:49.300 CC lib/ioat/ioat.o 00:03:49.300 CXX lib/trace_parser/trace.o 00:03:49.300 CC lib/util/base64.o 00:03:49.300 CC lib/util/bit_array.o 00:03:49.300 CC lib/util/cpuset.o 00:03:49.300 CC lib/util/crc32.o 00:03:49.300 CC lib/util/crc32c.o 00:03:49.300 CC lib/util/crc16.o 00:03:49.300 CC lib/vfio_user/host/vfio_user_pci.o 00:03:49.300 CC lib/util/crc32_ieee.o 00:03:49.300 CC lib/util/crc64.o 00:03:49.300 CC lib/util/dif.o 00:03:49.300 LIB libspdk_dma.a 00:03:49.300 SO libspdk_dma.so.5.0 00:03:49.300 CC lib/vfio_user/host/vfio_user.o 00:03:49.300 CC lib/util/fd.o 00:03:49.300 CC lib/util/fd_group.o 00:03:49.300 CC lib/util/file.o 00:03:49.300 SYMLINK libspdk_dma.so 00:03:49.300 CC lib/util/hexlify.o 00:03:49.300 LIB libspdk_ioat.a 00:03:49.300 CC lib/util/iov.o 
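"Found ninja-1.11.1.git.kitware.jobserver-1" in the meson output above matters for a nested build like this one: that Kitware fork of ninja understands the GNU make jobserver, so the meson compile spawned from SPDK's make -j10 draws on the same ten job tokens instead of multiplying the parallelism. A quick check, plus the usual fallback when ninja is not jobserver-aware (both commands are standard; -j here is meson's own job cap):

    ninja --version                    # 1.11.1.git.kitware.jobserver-1
    # Plain upstream ninja would ignore the jobserver; cap the nested build instead:
    meson compile -C builddir -j 10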
00:03:49.300 SO libspdk_ioat.so.7.0 00:03:49.300 CC lib/util/math.o 00:03:49.300 SYMLINK libspdk_ioat.so 00:03:49.300 CC lib/util/net.o 00:03:49.300 CC lib/util/pipe.o 00:03:49.300 CC lib/util/strerror_tls.o 00:03:49.300 CC lib/util/string.o 00:03:49.300 LIB libspdk_vfio_user.a 00:03:49.300 SO libspdk_vfio_user.so.5.0 00:03:49.300 CC lib/util/uuid.o 00:03:49.300 CC lib/util/xor.o 00:03:49.300 CC lib/util/zipf.o 00:03:49.300 SYMLINK libspdk_vfio_user.so 00:03:49.300 CC lib/util/md5.o 00:03:49.300 LIB libspdk_util.a 00:03:49.300 SO libspdk_util.so.10.1 00:03:49.300 LIB libspdk_trace_parser.a 00:03:49.300 SO libspdk_trace_parser.so.6.0 00:03:49.300 SYMLINK libspdk_util.so 00:03:49.300 SYMLINK libspdk_trace_parser.so 00:03:49.300 CC lib/env_dpdk/env.o 00:03:49.300 CC lib/env_dpdk/memory.o 00:03:49.300 CC lib/env_dpdk/init.o 00:03:49.300 CC lib/env_dpdk/pci.o 00:03:49.300 CC lib/env_dpdk/threads.o 00:03:49.300 CC lib/rdma_utils/rdma_utils.o 00:03:49.300 CC lib/json/json_parse.o 00:03:49.300 CC lib/vmd/vmd.o 00:03:49.300 CC lib/idxd/idxd.o 00:03:49.300 CC lib/conf/conf.o 00:03:49.300 CC lib/env_dpdk/pci_ioat.o 00:03:49.300 LIB libspdk_rdma_utils.a 00:03:49.300 SO libspdk_rdma_utils.so.1.0 00:03:49.300 CC lib/json/json_util.o 00:03:49.300 LIB libspdk_conf.a 00:03:49.300 SO libspdk_conf.so.6.0 00:03:49.300 CC lib/json/json_write.o 00:03:49.300 SYMLINK libspdk_rdma_utils.so 00:03:49.300 CC lib/idxd/idxd_user.o 00:03:49.300 SYMLINK libspdk_conf.so 00:03:49.300 CC lib/env_dpdk/pci_virtio.o 00:03:49.300 CC lib/env_dpdk/pci_vmd.o 00:03:49.300 CC lib/env_dpdk/pci_idxd.o 00:03:49.300 CC lib/env_dpdk/pci_event.o 00:03:49.300 CC lib/rdma_provider/common.o 00:03:49.300 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:49.300 CC lib/env_dpdk/sigbus_handler.o 00:03:49.300 CC lib/vmd/led.o 00:03:49.300 CC lib/idxd/idxd_kernel.o 00:03:49.300 CC lib/env_dpdk/pci_dpdk.o 00:03:49.300 LIB libspdk_json.a 00:03:49.300 SO libspdk_json.so.6.0 00:03:49.300 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:49.300 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:49.300 SYMLINK libspdk_json.so 00:03:49.300 LIB libspdk_rdma_provider.a 00:03:49.300 SO libspdk_rdma_provider.so.7.0 00:03:49.300 LIB libspdk_idxd.a 00:03:49.300 LIB libspdk_vmd.a 00:03:49.300 SO libspdk_idxd.so.12.1 00:03:49.300 SYMLINK libspdk_rdma_provider.so 00:03:49.300 SO libspdk_vmd.so.6.0 00:03:49.300 SYMLINK libspdk_idxd.so 00:03:49.300 SYMLINK libspdk_vmd.so 00:03:49.300 CC lib/jsonrpc/jsonrpc_server.o 00:03:49.300 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:49.300 CC lib/jsonrpc/jsonrpc_client.o 00:03:49.300 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:49.300 LIB libspdk_jsonrpc.a 00:03:49.300 SO libspdk_jsonrpc.so.6.0 00:03:49.300 SYMLINK libspdk_jsonrpc.so 00:03:49.300 CC lib/rpc/rpc.o 00:03:49.300 LIB libspdk_env_dpdk.a 00:03:49.300 SO libspdk_env_dpdk.so.15.1 00:03:49.300 LIB libspdk_rpc.a 00:03:49.300 SO libspdk_rpc.so.6.0 00:03:49.300 SYMLINK libspdk_env_dpdk.so 00:03:49.300 SYMLINK libspdk_rpc.so 00:03:49.300 CC lib/trace/trace_flags.o 00:03:49.300 CC lib/trace/trace.o 00:03:49.300 CC lib/trace/trace_rpc.o 00:03:49.300 CC lib/keyring/keyring.o 00:03:49.300 CC lib/notify/notify.o 00:03:49.300 CC lib/notify/notify_rpc.o 00:03:49.300 CC lib/keyring/keyring_rpc.o 00:03:49.561 LIB libspdk_notify.a 00:03:49.561 SO libspdk_notify.so.6.0 00:03:49.561 LIB libspdk_keyring.a 00:03:49.561 SYMLINK libspdk_notify.so 00:03:49.561 LIB libspdk_trace.a 00:03:49.561 SO libspdk_keyring.so.2.0 00:03:49.561 SO libspdk_trace.so.11.0 00:03:49.561 SYMLINK libspdk_keyring.so 00:03:49.561 SYMLINK 
libspdk_trace.so 00:03:49.822 CC lib/thread/iobuf.o 00:03:49.823 CC lib/thread/thread.o 00:03:49.823 CC lib/sock/sock.o 00:03:49.823 CC lib/sock/sock_rpc.o 00:03:50.084 LIB libspdk_sock.a 00:03:50.353 SO libspdk_sock.so.10.0 00:03:50.353 SYMLINK libspdk_sock.so 00:03:50.636 CC lib/nvme/nvme_ctrlr.o 00:03:50.636 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:50.636 CC lib/nvme/nvme_ns_cmd.o 00:03:50.636 CC lib/nvme/nvme_ns.o 00:03:50.636 CC lib/nvme/nvme_fabric.o 00:03:50.636 CC lib/nvme/nvme_pcie.o 00:03:50.636 CC lib/nvme/nvme_qpair.o 00:03:50.636 CC lib/nvme/nvme_pcie_common.o 00:03:50.636 CC lib/nvme/nvme.o 00:03:50.900 CC lib/nvme/nvme_quirks.o 00:03:51.159 CC lib/nvme/nvme_transport.o 00:03:51.159 CC lib/nvme/nvme_discovery.o 00:03:51.159 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:51.159 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:51.159 LIB libspdk_thread.a 00:03:51.417 SO libspdk_thread.so.11.0 00:03:51.417 CC lib/nvme/nvme_tcp.o 00:03:51.417 CC lib/nvme/nvme_opal.o 00:03:51.417 SYMLINK libspdk_thread.so 00:03:51.417 CC lib/nvme/nvme_io_msg.o 00:03:51.417 CC lib/nvme/nvme_poll_group.o 00:03:51.417 CC lib/nvme/nvme_zns.o 00:03:51.676 CC lib/nvme/nvme_stubs.o 00:03:51.676 CC lib/nvme/nvme_auth.o 00:03:51.934 CC lib/init/json_config.o 00:03:51.934 CC lib/blob/blobstore.o 00:03:51.934 CC lib/accel/accel.o 00:03:51.934 CC lib/blob/request.o 00:03:51.934 CC lib/blob/zeroes.o 00:03:51.934 CC lib/blob/blob_bs_dev.o 00:03:51.934 CC lib/init/subsystem.o 00:03:52.192 CC lib/init/subsystem_rpc.o 00:03:52.192 CC lib/init/rpc.o 00:03:52.192 CC lib/nvme/nvme_cuse.o 00:03:52.192 CC lib/nvme/nvme_rdma.o 00:03:52.192 LIB libspdk_init.a 00:03:52.192 CC lib/virtio/virtio.o 00:03:52.192 SO libspdk_init.so.6.0 00:03:52.450 CC lib/fsdev/fsdev.o 00:03:52.450 SYMLINK libspdk_init.so 00:03:52.450 CC lib/virtio/virtio_vhost_user.o 00:03:52.450 CC lib/virtio/virtio_vfio_user.o 00:03:52.450 CC lib/virtio/virtio_pci.o 00:03:52.708 CC lib/accel/accel_rpc.o 00:03:52.708 CC lib/fsdev/fsdev_io.o 00:03:52.708 LIB libspdk_virtio.a 00:03:52.708 CC lib/event/app.o 00:03:52.708 SO libspdk_virtio.so.7.0 00:03:52.708 CC lib/event/reactor.o 00:03:52.708 SYMLINK libspdk_virtio.so 00:03:52.708 CC lib/event/log_rpc.o 00:03:52.708 CC lib/event/app_rpc.o 00:03:52.966 CC lib/event/scheduler_static.o 00:03:52.966 CC lib/fsdev/fsdev_rpc.o 00:03:52.966 CC lib/accel/accel_sw.o 00:03:52.966 LIB libspdk_fsdev.a 00:03:53.224 SO libspdk_fsdev.so.2.0 00:03:53.224 SYMLINK libspdk_fsdev.so 00:03:53.224 LIB libspdk_event.a 00:03:53.224 SO libspdk_event.so.14.0 00:03:53.224 LIB libspdk_accel.a 00:03:53.224 SO libspdk_accel.so.16.0 00:03:53.224 SYMLINK libspdk_event.so 00:03:53.482 SYMLINK libspdk_accel.so 00:03:53.482 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:53.482 LIB libspdk_nvme.a 00:03:53.482 CC lib/bdev/bdev.o 00:03:53.482 CC lib/bdev/bdev_zone.o 00:03:53.482 CC lib/bdev/scsi_nvme.o 00:03:53.482 CC lib/bdev/bdev_rpc.o 00:03:53.482 CC lib/bdev/part.o 00:03:53.740 SO libspdk_nvme.so.15.0 00:03:53.740 LIB libspdk_fuse_dispatcher.a 00:03:53.998 SO libspdk_fuse_dispatcher.so.1.0 00:03:53.998 SYMLINK libspdk_nvme.so 00:03:53.998 SYMLINK libspdk_fuse_dispatcher.so 00:03:54.938 LIB libspdk_blob.a 00:03:54.938 SO libspdk_blob.so.12.0 00:03:54.938 SYMLINK libspdk_blob.so 00:03:55.199 CC lib/lvol/lvol.o 00:03:55.199 CC lib/blobfs/blobfs.o 00:03:55.199 CC lib/blobfs/tree.o 00:03:56.138 LIB libspdk_blobfs.a 00:03:56.138 SO libspdk_blobfs.so.11.0 00:03:56.138 SYMLINK libspdk_blobfs.so 00:03:56.138 LIB libspdk_lvol.a 00:03:56.138 SO libspdk_lvol.so.11.0 00:03:56.138 
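The CC/LIB/SO/SYMLINK tags running through this stretch are SPDK's abbreviated make output: each component's objects are compiled (CC), archived (LIB ... .a), linked shared (SO ... .so.N.M, since configure ran with --with-shared), and given an unversioned symlink (SYMLINK). One way to sanity-check the inter-library edges those lines imply, sketched against this tree (the grep output is illustrative; lvol is built on the blobstore, so at least that edge should appear):

    ldd /home/vagrant/spdk_repo/spdk/build/lib/libspdk_lvol.so | grep libspdk
    #   libspdk_blob.so.<N> => /home/vagrant/spdk_repo/spdk/build/lib/libspdk_blob.so.<N>
    #   ... plus whatever else lvol was linked against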
LIB libspdk_bdev.a 00:03:56.397 SYMLINK libspdk_lvol.so 00:03:56.397 SO libspdk_bdev.so.17.0 00:03:56.397 SYMLINK libspdk_bdev.so 00:03:56.656 CC lib/ublk/ublk.o 00:03:56.656 CC lib/ublk/ublk_rpc.o 00:03:56.656 CC lib/nvmf/ctrlr.o 00:03:56.656 CC lib/scsi/dev.o 00:03:56.656 CC lib/nvmf/ctrlr_discovery.o 00:03:56.656 CC lib/scsi/lun.o 00:03:56.656 CC lib/scsi/port.o 00:03:56.656 CC lib/nbd/nbd.o 00:03:56.656 CC lib/nvmf/ctrlr_bdev.o 00:03:56.656 CC lib/ftl/ftl_core.o 00:03:56.656 CC lib/nvmf/subsystem.o 00:03:56.656 CC lib/nvmf/nvmf.o 00:03:56.915 CC lib/nvmf/nvmf_rpc.o 00:03:56.915 CC lib/nbd/nbd_rpc.o 00:03:56.915 CC lib/scsi/scsi.o 00:03:56.915 CC lib/ftl/ftl_init.o 00:03:56.915 LIB libspdk_nbd.a 00:03:56.915 SO libspdk_nbd.so.7.0 00:03:56.915 CC lib/scsi/scsi_bdev.o 00:03:56.915 CC lib/scsi/scsi_pr.o 00:03:56.915 SYMLINK libspdk_nbd.so 00:03:57.174 CC lib/nvmf/transport.o 00:03:57.174 CC lib/ftl/ftl_layout.o 00:03:57.174 LIB libspdk_ublk.a 00:03:57.174 SO libspdk_ublk.so.3.0 00:03:57.174 CC lib/nvmf/tcp.o 00:03:57.174 SYMLINK libspdk_ublk.so 00:03:57.174 CC lib/nvmf/stubs.o 00:03:57.433 CC lib/nvmf/mdns_server.o 00:03:57.433 CC lib/ftl/ftl_debug.o 00:03:57.433 CC lib/scsi/scsi_rpc.o 00:03:57.433 CC lib/nvmf/rdma.o 00:03:57.433 CC lib/scsi/task.o 00:03:57.691 CC lib/nvmf/auth.o 00:03:57.691 CC lib/ftl/ftl_io.o 00:03:57.691 CC lib/ftl/ftl_sb.o 00:03:57.691 CC lib/ftl/ftl_l2p.o 00:03:57.691 CC lib/ftl/ftl_l2p_flat.o 00:03:57.691 LIB libspdk_scsi.a 00:03:57.691 CC lib/ftl/ftl_nv_cache.o 00:03:57.691 SO libspdk_scsi.so.9.0 00:03:57.949 CC lib/ftl/ftl_band.o 00:03:57.949 CC lib/ftl/ftl_band_ops.o 00:03:57.950 SYMLINK libspdk_scsi.so 00:03:57.950 CC lib/ftl/ftl_writer.o 00:03:57.950 CC lib/ftl/ftl_rq.o 00:03:57.950 CC lib/ftl/ftl_reloc.o 00:03:58.208 CC lib/ftl/ftl_l2p_cache.o 00:03:58.208 CC lib/ftl/ftl_p2l.o 00:03:58.208 CC lib/ftl/ftl_p2l_log.o 00:03:58.208 CC lib/ftl/mngt/ftl_mngt.o 00:03:58.208 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:58.208 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:58.466 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:58.466 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:58.466 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:58.466 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:58.466 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:58.466 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:58.466 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:58.724 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:58.724 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:58.724 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:58.724 CC lib/ftl/utils/ftl_conf.o 00:03:58.724 CC lib/ftl/utils/ftl_md.o 00:03:58.724 CC lib/ftl/utils/ftl_mempool.o 00:03:58.724 CC lib/ftl/utils/ftl_bitmap.o 00:03:58.724 CC lib/ftl/utils/ftl_property.o 00:03:58.724 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:58.982 CC lib/iscsi/conn.o 00:03:58.982 CC lib/vhost/vhost.o 00:03:58.982 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:58.982 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:58.982 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:58.982 CC lib/iscsi/init_grp.o 00:03:58.982 CC lib/vhost/vhost_rpc.o 00:03:58.982 CC lib/iscsi/iscsi.o 00:03:58.982 CC lib/vhost/vhost_scsi.o 00:03:59.239 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:59.239 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:59.239 LIB libspdk_nvmf.a 00:03:59.239 CC lib/iscsi/param.o 00:03:59.239 SO libspdk_nvmf.so.20.0 00:03:59.239 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:59.239 CC lib/iscsi/portal_grp.o 00:03:59.239 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:59.239 CC lib/iscsi/tgt_node.o 00:03:59.496 CC lib/vhost/vhost_blk.o 00:03:59.496 CC 
lib/iscsi/iscsi_subsystem.o 00:03:59.496 SYMLINK libspdk_nvmf.so 00:03:59.496 CC lib/iscsi/iscsi_rpc.o 00:03:59.496 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:59.496 CC lib/vhost/rte_vhost_user.o 00:03:59.496 CC lib/iscsi/task.o 00:03:59.755 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:59.755 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:59.755 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:59.755 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:59.755 CC lib/ftl/base/ftl_base_dev.o 00:03:59.755 CC lib/ftl/base/ftl_base_bdev.o 00:03:59.755 CC lib/ftl/ftl_trace.o 00:04:00.014 LIB libspdk_ftl.a 00:04:00.272 SO libspdk_ftl.so.9.0 00:04:00.530 LIB libspdk_vhost.a 00:04:00.530 SYMLINK libspdk_ftl.so 00:04:00.530 LIB libspdk_iscsi.a 00:04:00.530 SO libspdk_vhost.so.8.0 00:04:00.530 SYMLINK libspdk_vhost.so 00:04:00.530 SO libspdk_iscsi.so.8.0 00:04:00.788 SYMLINK libspdk_iscsi.so 00:04:01.046 CC module/env_dpdk/env_dpdk_rpc.o 00:04:01.046 CC module/accel/dsa/accel_dsa.o 00:04:01.046 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:01.046 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:01.046 CC module/accel/ioat/accel_ioat.o 00:04:01.046 CC module/fsdev/aio/fsdev_aio.o 00:04:01.046 CC module/sock/posix/posix.o 00:04:01.046 CC module/keyring/file/keyring.o 00:04:01.046 CC module/blob/bdev/blob_bdev.o 00:04:01.046 CC module/accel/error/accel_error.o 00:04:01.046 LIB libspdk_env_dpdk_rpc.a 00:04:01.046 SO libspdk_env_dpdk_rpc.so.6.0 00:04:01.046 SYMLINK libspdk_env_dpdk_rpc.so 00:04:01.046 CC module/keyring/file/keyring_rpc.o 00:04:01.046 LIB libspdk_scheduler_dynamic.a 00:04:01.046 LIB libspdk_scheduler_dpdk_governor.a 00:04:01.305 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:01.305 SO libspdk_scheduler_dynamic.so.4.0 00:04:01.305 CC module/accel/ioat/accel_ioat_rpc.o 00:04:01.305 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:01.305 CC module/accel/error/accel_error_rpc.o 00:04:01.305 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:01.305 SYMLINK libspdk_scheduler_dynamic.so 00:04:01.305 LIB libspdk_keyring_file.a 00:04:01.305 CC module/accel/dsa/accel_dsa_rpc.o 00:04:01.305 CC module/keyring/linux/keyring.o 00:04:01.305 SO libspdk_keyring_file.so.2.0 00:04:01.305 LIB libspdk_blob_bdev.a 00:04:01.305 LIB libspdk_accel_ioat.a 00:04:01.305 SO libspdk_blob_bdev.so.12.0 00:04:01.305 SO libspdk_accel_ioat.so.6.0 00:04:01.305 SYMLINK libspdk_keyring_file.so 00:04:01.305 CC module/scheduler/gscheduler/gscheduler.o 00:04:01.305 CC module/keyring/linux/keyring_rpc.o 00:04:01.305 CC module/fsdev/aio/linux_aio_mgr.o 00:04:01.305 LIB libspdk_accel_error.a 00:04:01.305 SYMLINK libspdk_blob_bdev.so 00:04:01.305 SYMLINK libspdk_accel_ioat.so 00:04:01.305 LIB libspdk_accel_dsa.a 00:04:01.305 SO libspdk_accel_error.so.2.0 00:04:01.305 SO libspdk_accel_dsa.so.5.0 00:04:01.563 SYMLINK libspdk_accel_error.so 00:04:01.563 LIB libspdk_keyring_linux.a 00:04:01.563 SYMLINK libspdk_accel_dsa.so 00:04:01.563 LIB libspdk_scheduler_gscheduler.a 00:04:01.563 SO libspdk_keyring_linux.so.1.0 00:04:01.563 SO libspdk_scheduler_gscheduler.so.4.0 00:04:01.563 CC module/accel/iaa/accel_iaa.o 00:04:01.563 SYMLINK libspdk_keyring_linux.so 00:04:01.563 CC module/accel/iaa/accel_iaa_rpc.o 00:04:01.563 SYMLINK libspdk_scheduler_gscheduler.so 00:04:01.563 LIB libspdk_fsdev_aio.a 00:04:01.563 LIB libspdk_sock_posix.a 00:04:01.563 CC module/bdev/gpt/gpt.o 00:04:01.563 CC module/bdev/error/vbdev_error.o 00:04:01.563 SO libspdk_fsdev_aio.so.1.0 00:04:01.563 SO libspdk_sock_posix.so.6.0 00:04:01.563 CC module/blobfs/bdev/blobfs_bdev.o 00:04:01.563 CC 
module/bdev/delay/vbdev_delay.o 00:04:01.563 SYMLINK libspdk_fsdev_aio.so 00:04:01.822 CC module/bdev/lvol/vbdev_lvol.o 00:04:01.822 CC module/bdev/malloc/bdev_malloc.o 00:04:01.822 SYMLINK libspdk_sock_posix.so 00:04:01.822 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:01.822 LIB libspdk_accel_iaa.a 00:04:01.822 SO libspdk_accel_iaa.so.3.0 00:04:01.822 CC module/bdev/gpt/vbdev_gpt.o 00:04:01.822 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:01.822 CC module/bdev/null/bdev_null.o 00:04:01.822 CC module/bdev/nvme/bdev_nvme.o 00:04:01.822 SYMLINK libspdk_accel_iaa.so 00:04:01.822 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:01.822 CC module/bdev/error/vbdev_error_rpc.o 00:04:01.822 CC module/bdev/nvme/nvme_rpc.o 00:04:02.080 LIB libspdk_blobfs_bdev.a 00:04:02.080 SO libspdk_blobfs_bdev.so.6.0 00:04:02.080 LIB libspdk_bdev_delay.a 00:04:02.080 LIB libspdk_bdev_error.a 00:04:02.080 SO libspdk_bdev_delay.so.6.0 00:04:02.080 SO libspdk_bdev_error.so.6.0 00:04:02.080 LIB libspdk_bdev_gpt.a 00:04:02.080 SYMLINK libspdk_blobfs_bdev.so 00:04:02.080 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:02.080 CC module/bdev/null/bdev_null_rpc.o 00:04:02.080 SO libspdk_bdev_gpt.so.6.0 00:04:02.080 SYMLINK libspdk_bdev_error.so 00:04:02.080 SYMLINK libspdk_bdev_delay.so 00:04:02.080 CC module/bdev/nvme/bdev_mdns_client.o 00:04:02.080 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:02.080 CC module/bdev/nvme/vbdev_opal.o 00:04:02.080 SYMLINK libspdk_bdev_gpt.so 00:04:02.339 CC module/bdev/passthru/vbdev_passthru.o 00:04:02.339 LIB libspdk_bdev_malloc.a 00:04:02.339 LIB libspdk_bdev_null.a 00:04:02.339 SO libspdk_bdev_malloc.so.6.0 00:04:02.339 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:02.339 SO libspdk_bdev_null.so.6.0 00:04:02.339 CC module/bdev/raid/bdev_raid.o 00:04:02.339 SYMLINK libspdk_bdev_malloc.so 00:04:02.339 CC module/bdev/raid/bdev_raid_rpc.o 00:04:02.339 SYMLINK libspdk_bdev_null.so 00:04:02.339 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:02.339 CC module/bdev/split/vbdev_split.o 00:04:02.339 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:02.339 LIB libspdk_bdev_lvol.a 00:04:02.597 CC module/bdev/raid/bdev_raid_sb.o 00:04:02.597 SO libspdk_bdev_lvol.so.6.0 00:04:02.597 CC module/bdev/split/vbdev_split_rpc.o 00:04:02.597 LIB libspdk_bdev_passthru.a 00:04:02.597 SO libspdk_bdev_passthru.so.6.0 00:04:02.597 CC module/bdev/raid/raid0.o 00:04:02.597 SYMLINK libspdk_bdev_lvol.so 00:04:02.597 SYMLINK libspdk_bdev_passthru.so 00:04:02.597 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:02.597 LIB libspdk_bdev_split.a 00:04:02.597 CC module/bdev/xnvme/bdev_xnvme.o 00:04:02.597 CC module/bdev/aio/bdev_aio.o 00:04:02.597 SO libspdk_bdev_split.so.6.0 00:04:02.597 CC module/bdev/ftl/bdev_ftl.o 00:04:02.855 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:02.855 SYMLINK libspdk_bdev_split.so 00:04:02.855 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:02.855 CC module/bdev/raid/raid1.o 00:04:02.855 CC module/bdev/iscsi/bdev_iscsi.o 00:04:02.855 CC module/bdev/raid/concat.o 00:04:02.855 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:02.855 LIB libspdk_bdev_xnvme.a 00:04:02.855 LIB libspdk_bdev_zone_block.a 00:04:02.855 SO libspdk_bdev_xnvme.so.3.0 00:04:02.855 SO libspdk_bdev_zone_block.so.6.0 00:04:03.114 SYMLINK libspdk_bdev_xnvme.so 00:04:03.114 CC module/bdev/aio/bdev_aio_rpc.o 00:04:03.114 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:03.114 SYMLINK libspdk_bdev_zone_block.so 00:04:03.114 LIB libspdk_bdev_ftl.a 00:04:03.114 SO libspdk_bdev_ftl.so.6.0 00:04:03.114 LIB libspdk_bdev_aio.a 00:04:03.114 SYMLINK 
libspdk_bdev_ftl.so 00:04:03.114 SO libspdk_bdev_aio.so.6.0 00:04:03.114 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:03.114 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:03.114 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:03.114 LIB libspdk_bdev_iscsi.a 00:04:03.114 SO libspdk_bdev_iscsi.so.6.0 00:04:03.114 SYMLINK libspdk_bdev_aio.so 00:04:03.372 SYMLINK libspdk_bdev_iscsi.so 00:04:03.372 LIB libspdk_bdev_raid.a 00:04:03.372 SO libspdk_bdev_raid.so.6.0 00:04:03.372 SYMLINK libspdk_bdev_raid.so 00:04:03.630 LIB libspdk_bdev_virtio.a 00:04:03.630 SO libspdk_bdev_virtio.so.6.0 00:04:03.630 SYMLINK libspdk_bdev_virtio.so 00:04:04.565 LIB libspdk_bdev_nvme.a 00:04:04.565 SO libspdk_bdev_nvme.so.7.1 00:04:04.565 SYMLINK libspdk_bdev_nvme.so 00:04:05.133 CC module/event/subsystems/fsdev/fsdev.o 00:04:05.133 CC module/event/subsystems/iobuf/iobuf.o 00:04:05.133 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:05.133 CC module/event/subsystems/keyring/keyring.o 00:04:05.133 CC module/event/subsystems/scheduler/scheduler.o 00:04:05.133 CC module/event/subsystems/sock/sock.o 00:04:05.133 CC module/event/subsystems/vmd/vmd.o 00:04:05.133 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:05.133 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:05.133 LIB libspdk_event_keyring.a 00:04:05.133 LIB libspdk_event_fsdev.a 00:04:05.133 LIB libspdk_event_scheduler.a 00:04:05.133 LIB libspdk_event_sock.a 00:04:05.133 LIB libspdk_event_vmd.a 00:04:05.133 LIB libspdk_event_vhost_blk.a 00:04:05.133 SO libspdk_event_keyring.so.1.0 00:04:05.133 LIB libspdk_event_iobuf.a 00:04:05.133 SO libspdk_event_fsdev.so.1.0 00:04:05.133 SO libspdk_event_scheduler.so.4.0 00:04:05.133 SO libspdk_event_sock.so.5.0 00:04:05.133 SO libspdk_event_vhost_blk.so.3.0 00:04:05.133 SO libspdk_event_vmd.so.6.0 00:04:05.133 SO libspdk_event_iobuf.so.3.0 00:04:05.133 SYMLINK libspdk_event_keyring.so 00:04:05.133 SYMLINK libspdk_event_fsdev.so 00:04:05.133 SYMLINK libspdk_event_vhost_blk.so 00:04:05.133 SYMLINK libspdk_event_sock.so 00:04:05.133 SYMLINK libspdk_event_scheduler.so 00:04:05.133 SYMLINK libspdk_event_vmd.so 00:04:05.133 SYMLINK libspdk_event_iobuf.so 00:04:05.395 CC module/event/subsystems/accel/accel.o 00:04:05.395 LIB libspdk_event_accel.a 00:04:05.657 SO libspdk_event_accel.so.6.0 00:04:05.657 SYMLINK libspdk_event_accel.so 00:04:05.922 CC module/event/subsystems/bdev/bdev.o 00:04:05.922 LIB libspdk_event_bdev.a 00:04:05.922 SO libspdk_event_bdev.so.6.0 00:04:05.922 SYMLINK libspdk_event_bdev.so 00:04:06.186 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:06.186 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:06.186 CC module/event/subsystems/nbd/nbd.o 00:04:06.186 CC module/event/subsystems/scsi/scsi.o 00:04:06.186 CC module/event/subsystems/ublk/ublk.o 00:04:06.186 LIB libspdk_event_nbd.a 00:04:06.186 SO libspdk_event_nbd.so.6.0 00:04:06.445 LIB libspdk_event_ublk.a 00:04:06.445 SO libspdk_event_ublk.so.3.0 00:04:06.445 LIB libspdk_event_scsi.a 00:04:06.445 SYMLINK libspdk_event_nbd.so 00:04:06.445 SO libspdk_event_scsi.so.6.0 00:04:06.445 LIB libspdk_event_nvmf.a 00:04:06.445 SYMLINK libspdk_event_ublk.so 00:04:06.445 SYMLINK libspdk_event_scsi.so 00:04:06.445 SO libspdk_event_nvmf.so.6.0 00:04:06.445 SYMLINK libspdk_event_nvmf.so 00:04:06.445 CC module/event/subsystems/iscsi/iscsi.o 00:04:06.706 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:06.706 LIB libspdk_event_vhost_scsi.a 00:04:06.706 SO libspdk_event_vhost_scsi.so.3.0 00:04:06.706 LIB libspdk_event_iscsi.a 00:04:06.706 SO 
libspdk_event_iscsi.so.6.0 00:04:06.706 SYMLINK libspdk_event_vhost_scsi.so 00:04:06.706 SYMLINK libspdk_event_iscsi.so 00:04:06.966 SO libspdk.so.6.0 00:04:06.966 SYMLINK libspdk.so 00:04:06.966 CC test/rpc_client/rpc_client_test.o 00:04:07.225 CXX app/trace/trace.o 00:04:07.225 TEST_HEADER include/spdk/accel.h 00:04:07.225 CC app/trace_record/trace_record.o 00:04:07.225 TEST_HEADER include/spdk/accel_module.h 00:04:07.225 TEST_HEADER include/spdk/assert.h 00:04:07.225 TEST_HEADER include/spdk/barrier.h 00:04:07.225 TEST_HEADER include/spdk/base64.h 00:04:07.225 TEST_HEADER include/spdk/bdev.h 00:04:07.225 TEST_HEADER include/spdk/bdev_module.h 00:04:07.225 TEST_HEADER include/spdk/bdev_zone.h 00:04:07.225 TEST_HEADER include/spdk/bit_array.h 00:04:07.225 TEST_HEADER include/spdk/bit_pool.h 00:04:07.225 CC app/nvmf_tgt/nvmf_main.o 00:04:07.225 TEST_HEADER include/spdk/blob_bdev.h 00:04:07.225 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:07.225 TEST_HEADER include/spdk/blobfs.h 00:04:07.225 TEST_HEADER include/spdk/blob.h 00:04:07.225 TEST_HEADER include/spdk/conf.h 00:04:07.225 TEST_HEADER include/spdk/config.h 00:04:07.225 TEST_HEADER include/spdk/cpuset.h 00:04:07.225 TEST_HEADER include/spdk/crc16.h 00:04:07.225 TEST_HEADER include/spdk/crc32.h 00:04:07.225 TEST_HEADER include/spdk/crc64.h 00:04:07.225 TEST_HEADER include/spdk/dif.h 00:04:07.225 TEST_HEADER include/spdk/dma.h 00:04:07.225 TEST_HEADER include/spdk/endian.h 00:04:07.225 TEST_HEADER include/spdk/env_dpdk.h 00:04:07.225 TEST_HEADER include/spdk/env.h 00:04:07.225 TEST_HEADER include/spdk/event.h 00:04:07.225 TEST_HEADER include/spdk/fd_group.h 00:04:07.225 TEST_HEADER include/spdk/fd.h 00:04:07.225 CC test/thread/poller_perf/poller_perf.o 00:04:07.225 TEST_HEADER include/spdk/file.h 00:04:07.225 TEST_HEADER include/spdk/fsdev.h 00:04:07.225 TEST_HEADER include/spdk/fsdev_module.h 00:04:07.225 TEST_HEADER include/spdk/ftl.h 00:04:07.225 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:07.225 TEST_HEADER include/spdk/gpt_spec.h 00:04:07.225 TEST_HEADER include/spdk/hexlify.h 00:04:07.225 TEST_HEADER include/spdk/histogram_data.h 00:04:07.225 TEST_HEADER include/spdk/idxd.h 00:04:07.225 TEST_HEADER include/spdk/idxd_spec.h 00:04:07.225 TEST_HEADER include/spdk/init.h 00:04:07.225 TEST_HEADER include/spdk/ioat.h 00:04:07.225 TEST_HEADER include/spdk/ioat_spec.h 00:04:07.225 TEST_HEADER include/spdk/iscsi_spec.h 00:04:07.225 TEST_HEADER include/spdk/json.h 00:04:07.225 CC examples/util/zipf/zipf.o 00:04:07.225 TEST_HEADER include/spdk/jsonrpc.h 00:04:07.225 TEST_HEADER include/spdk/keyring.h 00:04:07.225 TEST_HEADER include/spdk/keyring_module.h 00:04:07.225 TEST_HEADER include/spdk/likely.h 00:04:07.225 TEST_HEADER include/spdk/log.h 00:04:07.225 TEST_HEADER include/spdk/lvol.h 00:04:07.225 CC test/dma/test_dma/test_dma.o 00:04:07.225 TEST_HEADER include/spdk/md5.h 00:04:07.225 TEST_HEADER include/spdk/memory.h 00:04:07.225 TEST_HEADER include/spdk/mmio.h 00:04:07.225 TEST_HEADER include/spdk/nbd.h 00:04:07.225 TEST_HEADER include/spdk/net.h 00:04:07.225 CC test/app/bdev_svc/bdev_svc.o 00:04:07.225 TEST_HEADER include/spdk/notify.h 00:04:07.225 TEST_HEADER include/spdk/nvme.h 00:04:07.225 TEST_HEADER include/spdk/nvme_intel.h 00:04:07.225 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:07.225 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:07.225 TEST_HEADER include/spdk/nvme_spec.h 00:04:07.225 TEST_HEADER include/spdk/nvme_zns.h 00:04:07.225 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:07.225 TEST_HEADER 
include/spdk/nvmf_fc_spec.h 00:04:07.225 TEST_HEADER include/spdk/nvmf.h 00:04:07.225 TEST_HEADER include/spdk/nvmf_spec.h 00:04:07.225 TEST_HEADER include/spdk/nvmf_transport.h 00:04:07.225 TEST_HEADER include/spdk/opal.h 00:04:07.225 TEST_HEADER include/spdk/opal_spec.h 00:04:07.225 TEST_HEADER include/spdk/pci_ids.h 00:04:07.225 TEST_HEADER include/spdk/pipe.h 00:04:07.225 TEST_HEADER include/spdk/queue.h 00:04:07.225 TEST_HEADER include/spdk/reduce.h 00:04:07.225 TEST_HEADER include/spdk/rpc.h 00:04:07.225 TEST_HEADER include/spdk/scheduler.h 00:04:07.225 TEST_HEADER include/spdk/scsi.h 00:04:07.225 TEST_HEADER include/spdk/scsi_spec.h 00:04:07.225 TEST_HEADER include/spdk/sock.h 00:04:07.225 TEST_HEADER include/spdk/stdinc.h 00:04:07.225 TEST_HEADER include/spdk/string.h 00:04:07.225 TEST_HEADER include/spdk/thread.h 00:04:07.225 TEST_HEADER include/spdk/trace.h 00:04:07.225 TEST_HEADER include/spdk/trace_parser.h 00:04:07.225 CC test/env/mem_callbacks/mem_callbacks.o 00:04:07.225 TEST_HEADER include/spdk/tree.h 00:04:07.225 TEST_HEADER include/spdk/ublk.h 00:04:07.225 TEST_HEADER include/spdk/util.h 00:04:07.225 TEST_HEADER include/spdk/uuid.h 00:04:07.225 TEST_HEADER include/spdk/version.h 00:04:07.225 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:07.225 LINK rpc_client_test 00:04:07.225 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:07.225 TEST_HEADER include/spdk/vhost.h 00:04:07.225 TEST_HEADER include/spdk/vmd.h 00:04:07.225 TEST_HEADER include/spdk/xor.h 00:04:07.225 TEST_HEADER include/spdk/zipf.h 00:04:07.225 CXX test/cpp_headers/accel.o 00:04:07.225 LINK zipf 00:04:07.225 LINK nvmf_tgt 00:04:07.225 LINK poller_perf 00:04:07.225 LINK spdk_trace_record 00:04:07.485 LINK bdev_svc 00:04:07.485 CXX test/cpp_headers/accel_module.o 00:04:07.485 LINK mem_callbacks 00:04:07.485 LINK spdk_trace 00:04:07.485 CC test/env/vtophys/vtophys.o 00:04:07.485 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:07.485 CXX test/cpp_headers/assert.o 00:04:07.485 CC examples/ioat/perf/perf.o 00:04:07.485 CC examples/idxd/perf/perf.o 00:04:07.485 CC examples/vmd/lsvmd/lsvmd.o 00:04:07.485 CC examples/vmd/led/led.o 00:04:07.485 LINK vtophys 00:04:07.485 LINK env_dpdk_post_init 00:04:07.744 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:07.744 LINK test_dma 00:04:07.744 CXX test/cpp_headers/barrier.o 00:04:07.744 LINK led 00:04:07.744 CC app/iscsi_tgt/iscsi_tgt.o 00:04:07.744 LINK lsvmd 00:04:07.744 CXX test/cpp_headers/base64.o 00:04:07.744 LINK ioat_perf 00:04:07.744 CXX test/cpp_headers/bdev.o 00:04:07.744 CC test/env/memory/memory_ut.o 00:04:07.744 CXX test/cpp_headers/bdev_module.o 00:04:07.745 LINK iscsi_tgt 00:04:07.745 LINK idxd_perf 00:04:08.004 CC examples/ioat/verify/verify.o 00:04:08.004 CC test/event/event_perf/event_perf.o 00:04:08.004 CC app/spdk_tgt/spdk_tgt.o 00:04:08.004 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:08.004 CXX test/cpp_headers/bdev_zone.o 00:04:08.004 LINK nvme_fuzz 00:04:08.004 LINK event_perf 00:04:08.004 LINK verify 00:04:08.004 CC examples/thread/thread/thread_ex.o 00:04:08.004 LINK spdk_tgt 00:04:08.004 CC test/nvme/aer/aer.o 00:04:08.004 LINK interrupt_tgt 00:04:08.004 CC examples/sock/hello_world/hello_sock.o 00:04:08.263 CXX test/cpp_headers/bit_array.o 00:04:08.263 CXX test/cpp_headers/bit_pool.o 00:04:08.263 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:08.263 CC test/event/reactor/reactor.o 00:04:08.263 CXX test/cpp_headers/blob_bdev.o 00:04:08.263 CC app/spdk_lspci/spdk_lspci.o 00:04:08.263 LINK reactor 00:04:08.263 LINK thread 00:04:08.263 LINK 
aer 00:04:08.263 LINK hello_sock 00:04:08.574 CXX test/cpp_headers/blobfs_bdev.o 00:04:08.574 LINK memory_ut 00:04:08.574 LINK spdk_lspci 00:04:08.574 CC test/accel/dif/dif.o 00:04:08.574 CC test/event/reactor_perf/reactor_perf.o 00:04:08.574 CC test/blobfs/mkfs/mkfs.o 00:04:08.574 CC test/nvme/reset/reset.o 00:04:08.574 CXX test/cpp_headers/blobfs.o 00:04:08.574 CC test/nvme/sgl/sgl.o 00:04:08.574 LINK reactor_perf 00:04:08.574 CC app/spdk_nvme_perf/perf.o 00:04:08.574 CC test/env/pci/pci_ut.o 00:04:08.574 CC examples/accel/perf/accel_perf.o 00:04:08.834 CXX test/cpp_headers/blob.o 00:04:08.834 LINK mkfs 00:04:08.834 CC test/event/app_repeat/app_repeat.o 00:04:08.834 LINK reset 00:04:08.834 LINK sgl 00:04:08.834 CXX test/cpp_headers/conf.o 00:04:08.834 LINK app_repeat 00:04:09.093 LINK pci_ut 00:04:09.093 CXX test/cpp_headers/config.o 00:04:09.093 CC test/nvme/e2edp/nvme_dp.o 00:04:09.093 CXX test/cpp_headers/cpuset.o 00:04:09.093 CC test/event/scheduler/scheduler.o 00:04:09.093 CC examples/blob/hello_world/hello_blob.o 00:04:09.093 CXX test/cpp_headers/crc16.o 00:04:09.093 LINK dif 00:04:09.093 LINK accel_perf 00:04:09.093 CXX test/cpp_headers/crc32.o 00:04:09.093 LINK scheduler 00:04:09.352 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:09.352 LINK nvme_dp 00:04:09.352 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:09.352 LINK hello_blob 00:04:09.352 CC examples/nvme/hello_world/hello_world.o 00:04:09.352 CXX test/cpp_headers/crc64.o 00:04:09.352 CXX test/cpp_headers/dif.o 00:04:09.352 CXX test/cpp_headers/dma.o 00:04:09.352 CC test/nvme/overhead/overhead.o 00:04:09.352 LINK spdk_nvme_perf 00:04:09.352 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:09.352 CXX test/cpp_headers/endian.o 00:04:09.352 CC examples/blob/cli/blobcli.o 00:04:09.610 LINK hello_world 00:04:09.610 LINK vhost_fuzz 00:04:09.610 CC test/app/histogram_perf/histogram_perf.o 00:04:09.610 CXX test/cpp_headers/env_dpdk.o 00:04:09.610 CC examples/bdev/hello_world/hello_bdev.o 00:04:09.610 CC app/spdk_nvme_identify/identify.o 00:04:09.610 LINK hello_fsdev 00:04:09.610 LINK histogram_perf 00:04:09.610 CC examples/nvme/reconnect/reconnect.o 00:04:09.610 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:09.610 LINK overhead 00:04:09.869 CXX test/cpp_headers/env.o 00:04:09.869 LINK hello_bdev 00:04:09.869 CXX test/cpp_headers/event.o 00:04:09.869 LINK blobcli 00:04:09.869 LINK iscsi_fuzz 00:04:09.869 CC examples/nvme/arbitration/arbitration.o 00:04:09.869 CXX test/cpp_headers/fd_group.o 00:04:09.869 CC test/nvme/err_injection/err_injection.o 00:04:09.869 LINK reconnect 00:04:09.869 CC examples/bdev/bdevperf/bdevperf.o 00:04:10.128 CC examples/nvme/hotplug/hotplug.o 00:04:10.128 CXX test/cpp_headers/fd.o 00:04:10.128 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:10.128 LINK err_injection 00:04:10.128 CC test/app/jsoncat/jsoncat.o 00:04:10.128 CC examples/nvme/abort/abort.o 00:04:10.128 LINK arbitration 00:04:10.128 CXX test/cpp_headers/file.o 00:04:10.128 LINK hotplug 00:04:10.128 LINK nvme_manage 00:04:10.128 LINK jsoncat 00:04:10.128 LINK cmb_copy 00:04:10.387 LINK spdk_nvme_identify 00:04:10.387 CC test/nvme/startup/startup.o 00:04:10.387 CXX test/cpp_headers/fsdev.o 00:04:10.387 CXX test/cpp_headers/fsdev_module.o 00:04:10.387 CXX test/cpp_headers/ftl.o 00:04:10.387 LINK startup 00:04:10.387 CC test/app/stub/stub.o 00:04:10.387 CC app/spdk_nvme_discover/discovery_aer.o 00:04:10.387 LINK abort 00:04:10.387 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:10.387 CC test/bdev/bdevio/bdevio.o 00:04:10.387 CXX 
test/cpp_headers/fuse_dispatcher.o 00:04:10.387 CC test/lvol/esnap/esnap.o 00:04:10.645 CXX test/cpp_headers/gpt_spec.o 00:04:10.645 CXX test/cpp_headers/hexlify.o 00:04:10.645 LINK stub 00:04:10.645 CC test/nvme/reserve/reserve.o 00:04:10.645 LINK spdk_nvme_discover 00:04:10.645 LINK pmr_persistence 00:04:10.645 CXX test/cpp_headers/histogram_data.o 00:04:10.645 CXX test/cpp_headers/idxd.o 00:04:10.645 CXX test/cpp_headers/idxd_spec.o 00:04:10.645 CXX test/cpp_headers/init.o 00:04:10.645 LINK reserve 00:04:10.645 CXX test/cpp_headers/ioat.o 00:04:10.645 CXX test/cpp_headers/ioat_spec.o 00:04:10.903 CXX test/cpp_headers/iscsi_spec.o 00:04:10.903 LINK bdevio 00:04:10.903 CC app/spdk_top/spdk_top.o 00:04:10.903 CXX test/cpp_headers/json.o 00:04:10.903 LINK bdevperf 00:04:10.903 CXX test/cpp_headers/jsonrpc.o 00:04:10.903 CXX test/cpp_headers/keyring.o 00:04:10.903 CXX test/cpp_headers/keyring_module.o 00:04:10.903 CC test/nvme/simple_copy/simple_copy.o 00:04:10.903 CC test/nvme/connect_stress/connect_stress.o 00:04:10.903 CC app/vhost/vhost.o 00:04:10.903 CXX test/cpp_headers/likely.o 00:04:11.163 CC test/nvme/boot_partition/boot_partition.o 00:04:11.163 CC app/spdk_dd/spdk_dd.o 00:04:11.163 LINK vhost 00:04:11.163 LINK simple_copy 00:04:11.163 LINK connect_stress 00:04:11.163 CC app/fio/nvme/fio_plugin.o 00:04:11.163 CC examples/nvmf/nvmf/nvmf.o 00:04:11.163 CXX test/cpp_headers/log.o 00:04:11.163 LINK boot_partition 00:04:11.163 CXX test/cpp_headers/lvol.o 00:04:11.421 CXX test/cpp_headers/md5.o 00:04:11.421 CXX test/cpp_headers/memory.o 00:04:11.421 CC app/fio/bdev/fio_plugin.o 00:04:11.421 LINK spdk_dd 00:04:11.421 CC test/nvme/compliance/nvme_compliance.o 00:04:11.421 LINK nvmf 00:04:11.421 CXX test/cpp_headers/mmio.o 00:04:11.421 CC test/nvme/fused_ordering/fused_ordering.o 00:04:11.678 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:11.678 LINK spdk_nvme 00:04:11.678 CC test/nvme/fdp/fdp.o 00:04:11.678 CXX test/cpp_headers/nbd.o 00:04:11.678 CXX test/cpp_headers/net.o 00:04:11.678 CXX test/cpp_headers/notify.o 00:04:11.678 CC test/nvme/cuse/cuse.o 00:04:11.678 LINK fused_ordering 00:04:11.678 LINK spdk_top 00:04:11.678 LINK doorbell_aers 00:04:11.678 LINK nvme_compliance 00:04:11.678 CXX test/cpp_headers/nvme.o 00:04:11.937 CXX test/cpp_headers/nvme_intel.o 00:04:11.937 CXX test/cpp_headers/nvme_ocssd.o 00:04:11.937 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:11.937 CXX test/cpp_headers/nvme_spec.o 00:04:11.937 LINK spdk_bdev 00:04:11.937 LINK fdp 00:04:11.937 CXX test/cpp_headers/nvme_zns.o 00:04:11.937 CXX test/cpp_headers/nvmf_cmd.o 00:04:11.937 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:11.937 CXX test/cpp_headers/nvmf.o 00:04:11.937 CXX test/cpp_headers/nvmf_spec.o 00:04:11.937 CXX test/cpp_headers/nvmf_transport.o 00:04:11.937 CXX test/cpp_headers/opal.o 00:04:11.937 CXX test/cpp_headers/opal_spec.o 00:04:11.937 CXX test/cpp_headers/pci_ids.o 00:04:12.196 CXX test/cpp_headers/pipe.o 00:04:12.196 CXX test/cpp_headers/queue.o 00:04:12.196 CXX test/cpp_headers/reduce.o 00:04:12.196 CXX test/cpp_headers/rpc.o 00:04:12.196 CXX test/cpp_headers/scheduler.o 00:04:12.196 CXX test/cpp_headers/scsi.o 00:04:12.196 CXX test/cpp_headers/scsi_spec.o 00:04:12.196 CXX test/cpp_headers/sock.o 00:04:12.196 CXX test/cpp_headers/stdinc.o 00:04:12.196 CXX test/cpp_headers/string.o 00:04:12.196 CXX test/cpp_headers/thread.o 00:04:12.196 CXX test/cpp_headers/trace.o 00:04:12.196 CXX test/cpp_headers/trace_parser.o 00:04:12.196 CXX test/cpp_headers/tree.o 00:04:12.196 CXX test/cpp_headers/ublk.o 
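(Editor's sketch.) The long run of "CXX test/cpp_headers/<header>.o" lines above, continuing below, is, on my reading, a self-containedness check: each public SPDK header is compiled as its own C++ translation unit, so a header that fails to include its own dependencies breaks the build here instead of in some unrelated consumer. A minimal illustration of that pattern; the loop, compiler flags, and scratch paths are hypothetical, not the project's actual build rules:

    # For each public header, generate a one-line C++ translation unit that
    # includes only that header, then compile it standalone.
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        echo "#include <spdk/${name}.h>" > "/tmp/hdr_check_${name}.cpp"
        g++ -std=c++11 -I include -c "/tmp/hdr_check_${name}.cpp" \
            -o "/tmp/hdr_check_${name}.o"
    done
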
00:04:12.196 CXX test/cpp_headers/util.o 00:04:12.196 CXX test/cpp_headers/uuid.o 00:04:12.455 CXX test/cpp_headers/version.o 00:04:12.455 CXX test/cpp_headers/vfio_user_pci.o 00:04:12.455 CXX test/cpp_headers/vfio_user_spec.o 00:04:12.455 CXX test/cpp_headers/vhost.o 00:04:12.455 CXX test/cpp_headers/vmd.o 00:04:12.455 CXX test/cpp_headers/xor.o 00:04:12.455 CXX test/cpp_headers/zipf.o 00:04:13.022 LINK cuse 00:04:14.923 LINK esnap 00:04:15.182 00:04:15.182 real 1m1.653s 00:04:15.182 user 5m3.759s 00:04:15.182 sys 0m50.805s 00:04:15.182 04:53:44 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:15.182 ************************************ 00:04:15.182 END TEST make 00:04:15.182 ************************************ 00:04:15.182 04:53:44 make -- common/autotest_common.sh@10 -- $ set +x 00:04:15.441 04:53:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:15.441 04:53:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:15.441 04:53:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:15.441 04:53:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.441 04:53:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:15.441 04:53:44 -- pm/common@44 -- $ pid=5809 00:04:15.441 04:53:44 -- pm/common@50 -- $ kill -TERM 5809 00:04:15.441 04:53:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.441 04:53:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:15.441 04:53:44 -- pm/common@44 -- $ pid=5810 00:04:15.441 04:53:44 -- pm/common@50 -- $ kill -TERM 5810 00:04:15.441 04:53:44 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:15.441 04:53:44 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:15.441 04:53:44 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:15.441 04:53:44 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:15.441 04:53:44 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:15.441 04:53:44 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:15.441 04:53:44 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.441 04:53:44 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.441 04:53:44 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.441 04:53:44 -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.441 04:53:44 -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.441 04:53:44 -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.441 04:53:44 -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.441 04:53:44 -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.441 04:53:44 -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.441 04:53:44 -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.441 04:53:44 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.441 04:53:44 -- scripts/common.sh@344 -- # case "$op" in 00:04:15.441 04:53:44 -- scripts/common.sh@345 -- # : 1 00:04:15.441 04:53:44 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.441 04:53:44 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.441 04:53:44 -- scripts/common.sh@365 -- # decimal 1 00:04:15.441 04:53:44 -- scripts/common.sh@353 -- # local d=1 00:04:15.441 04:53:44 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.441 04:53:44 -- scripts/common.sh@355 -- # echo 1 00:04:15.441 04:53:44 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.441 04:53:44 -- scripts/common.sh@366 -- # decimal 2 00:04:15.441 04:53:44 -- scripts/common.sh@353 -- # local d=2 00:04:15.441 04:53:44 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.441 04:53:44 -- scripts/common.sh@355 -- # echo 2 00:04:15.441 04:53:44 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.441 04:53:44 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.442 04:53:44 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.442 04:53:44 -- scripts/common.sh@368 -- # return 0 00:04:15.442 04:53:44 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.442 04:53:44 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:15.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.442 --rc genhtml_branch_coverage=1 00:04:15.442 --rc genhtml_function_coverage=1 00:04:15.442 --rc genhtml_legend=1 00:04:15.442 --rc geninfo_all_blocks=1 00:04:15.442 --rc geninfo_unexecuted_blocks=1 00:04:15.442 00:04:15.442 ' 00:04:15.442 04:53:44 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:15.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.442 --rc genhtml_branch_coverage=1 00:04:15.442 --rc genhtml_function_coverage=1 00:04:15.442 --rc genhtml_legend=1 00:04:15.442 --rc geninfo_all_blocks=1 00:04:15.442 --rc geninfo_unexecuted_blocks=1 00:04:15.442 00:04:15.442 ' 00:04:15.442 04:53:44 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:15.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.442 --rc genhtml_branch_coverage=1 00:04:15.442 --rc genhtml_function_coverage=1 00:04:15.442 --rc genhtml_legend=1 00:04:15.442 --rc geninfo_all_blocks=1 00:04:15.442 --rc geninfo_unexecuted_blocks=1 00:04:15.442 00:04:15.442 ' 00:04:15.442 04:53:44 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:15.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.442 --rc genhtml_branch_coverage=1 00:04:15.442 --rc genhtml_function_coverage=1 00:04:15.442 --rc genhtml_legend=1 00:04:15.442 --rc geninfo_all_blocks=1 00:04:15.442 --rc geninfo_unexecuted_blocks=1 00:04:15.442 00:04:15.442 ' 00:04:15.442 04:53:44 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:15.442 04:53:44 -- nvmf/common.sh@7 -- # uname -s 00:04:15.442 04:53:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:15.442 04:53:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:15.442 04:53:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:15.442 04:53:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:15.442 04:53:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:15.442 04:53:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:15.442 04:53:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:15.442 04:53:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:15.442 04:53:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:15.442 04:53:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:15.442 04:53:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3e346057-9c48-4fc2-acf8-735d220de68f 00:04:15.442 
04:53:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=3e346057-9c48-4fc2-acf8-735d220de68f 00:04:15.442 04:53:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:15.442 04:53:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:15.442 04:53:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:15.442 04:53:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:15.442 04:53:44 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:15.442 04:53:44 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:15.442 04:53:44 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:15.442 04:53:44 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:15.442 04:53:44 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:15.442 04:53:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.442 04:53:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.442 04:53:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.442 04:53:44 -- paths/export.sh@5 -- # export PATH 00:04:15.442 04:53:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.442 04:53:44 -- nvmf/common.sh@51 -- # : 0 00:04:15.442 04:53:44 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:15.442 04:53:44 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:15.442 04:53:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:15.442 04:53:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:15.442 04:53:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:15.442 04:53:44 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:15.442 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:15.442 04:53:44 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:15.442 04:53:44 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:15.442 04:53:44 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:15.442 04:53:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:15.442 04:53:44 -- spdk/autotest.sh@32 -- # uname -s 00:04:15.442 04:53:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:15.442 04:53:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:15.442 04:53:44 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:15.442 04:53:44 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:15.442 04:53:44 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:15.442 04:53:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:15.442 04:53:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:15.442 04:53:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:15.442 04:53:44 -- spdk/autotest.sh@48 -- # udevadm_pid=66211 00:04:15.442 04:53:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:15.442 04:53:44 -- pm/common@17 -- # local monitor 00:04:15.442 04:53:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.442 04:53:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:15.442 04:53:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:15.442 04:53:44 -- pm/common@25 -- # sleep 1 00:04:15.442 04:53:44 -- pm/common@21 -- # date +%s 00:04:15.442 04:53:44 -- pm/common@21 -- # date +%s 00:04:15.442 04:53:44 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732769624 00:04:15.442 04:53:44 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732769624 00:04:15.442 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732769624_collect-vmstat.pm.log 00:04:15.701 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732769624_collect-cpu-load.pm.log 00:04:16.635 04:53:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:16.635 04:53:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:16.635 04:53:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.635 04:53:45 -- common/autotest_common.sh@10 -- # set +x 00:04:16.635 04:53:45 -- spdk/autotest.sh@59 -- # create_test_list 00:04:16.635 04:53:45 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:16.635 04:53:45 -- common/autotest_common.sh@10 -- # set +x 00:04:16.635 04:53:45 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:16.635 04:53:45 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:16.635 04:53:45 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:16.635 04:53:45 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:16.635 04:53:45 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:16.635 04:53:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:16.635 04:53:45 -- common/autotest_common.sh@1457 -- # uname 00:04:16.635 04:53:45 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:16.635 04:53:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:16.635 04:53:45 -- common/autotest_common.sh@1477 -- # uname 00:04:16.635 04:53:45 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:16.635 04:53:45 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:16.635 04:53:45 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:16.635 lcov: LCOV version 1.15 00:04:16.635 04:53:45 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:31.516 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:31.516 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:46.394 04:54:14 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:46.394 04:54:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.394 04:54:14 -- common/autotest_common.sh@10 -- # set +x 00:04:46.394 04:54:14 -- spdk/autotest.sh@78 -- # rm -f 00:04:46.394 04:54:14 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:46.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.394 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:46.394 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:46.394 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:46.394 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:46.394 04:54:15 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:46.394 04:54:15 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:46.394 04:54:15 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:46.394 04:54:15 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:46.394 04:54:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:46.394 04:54:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:46.394 04:54:15 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:46.394 04:54:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:46.394 04:54:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:46.394 04:54:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:46.394 04:54:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:46.394 04:54:15 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:46.394 04:54:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:46.394 04:54:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:46.394 04:54:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:46.394 04:54:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:46.394 04:54:15 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:46.394 04:54:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:46.394 04:54:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:46.394 04:54:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:46.394 04:54:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:46.394 04:54:15 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:46.394 04:54:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:46.394 04:54:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:46.394 04:54:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:46.394 04:54:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:46.394 04:54:15 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:46.394 04:54:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:46.394 04:54:15 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:46.394 04:54:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:46.394 04:54:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:46.394 04:54:15 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:46.394 04:54:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:46.394 04:54:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:46.394 04:54:15 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:46.394 04:54:15 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:46.394 04:54:15 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:46.394 04:54:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:46.394 04:54:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:46.394 04:54:15 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:46.394 04:54:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.394 04:54:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:46.394 04:54:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:46.394 04:54:15 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:46.394 04:54:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:46.394 No valid GPT data, bailing 00:04:46.394 04:54:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:46.394 04:54:15 -- scripts/common.sh@394 -- # pt= 00:04:46.394 04:54:15 -- scripts/common.sh@395 -- # return 1 00:04:46.394 04:54:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:46.394 1+0 records in 00:04:46.394 1+0 records out 00:04:46.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0344997 s, 30.4 MB/s 00:04:46.394 04:54:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.394 04:54:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:46.394 04:54:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:46.394 04:54:15 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:46.394 04:54:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:46.656 No valid GPT data, bailing 00:04:46.656 04:54:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:46.656 04:54:15 -- scripts/common.sh@394 -- # pt= 00:04:46.656 04:54:15 -- scripts/common.sh@395 -- # return 1 00:04:46.656 04:54:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:46.656 1+0 records in 00:04:46.656 1+0 records out 00:04:46.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00479185 s, 219 MB/s 00:04:46.657 04:54:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.657 04:54:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:46.657 04:54:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:46.657 04:54:15 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:46.657 04:54:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:46.657 No valid GPT data, bailing 00:04:46.657 04:54:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:46.657 04:54:15 -- scripts/common.sh@394 -- # pt= 00:04:46.657 04:54:15 -- scripts/common.sh@395 -- # return 1 00:04:46.657 04:54:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:46.657 1+0 
records in 00:04:46.657 1+0 records out 00:04:46.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00395831 s, 265 MB/s 00:04:46.657 04:54:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.657 04:54:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:46.657 04:54:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:46.657 04:54:15 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:46.657 04:54:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:46.657 No valid GPT data, bailing 00:04:46.657 04:54:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:46.657 04:54:15 -- scripts/common.sh@394 -- # pt= 00:04:46.657 04:54:15 -- scripts/common.sh@395 -- # return 1 00:04:46.657 04:54:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:46.657 1+0 records in 00:04:46.657 1+0 records out 00:04:46.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00589524 s, 178 MB/s 00:04:46.657 04:54:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.657 04:54:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:46.657 04:54:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:46.657 04:54:15 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:46.657 04:54:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:46.919 No valid GPT data, bailing 00:04:46.919 04:54:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:46.919 04:54:15 -- scripts/common.sh@394 -- # pt= 00:04:46.919 04:54:15 -- scripts/common.sh@395 -- # return 1 00:04:46.919 04:54:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:46.919 1+0 records in 00:04:46.919 1+0 records out 00:04:46.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00630274 s, 166 MB/s 00:04:46.919 04:54:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.919 04:54:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:46.919 04:54:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:46.919 04:54:15 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:46.919 04:54:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:46.919 No valid GPT data, bailing 00:04:46.919 04:54:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:46.919 04:54:16 -- scripts/common.sh@394 -- # pt= 00:04:46.919 04:54:16 -- scripts/common.sh@395 -- # return 1 00:04:46.919 04:54:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:46.919 1+0 records in 00:04:46.919 1+0 records out 00:04:46.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00590373 s, 178 MB/s 00:04:46.919 04:54:16 -- spdk/autotest.sh@105 -- # sync 00:04:46.919 04:54:16 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:46.919 04:54:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:46.919 04:54:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:48.835 04:54:17 -- spdk/autotest.sh@111 -- # uname -s 00:04:48.835 04:54:17 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:48.835 04:54:17 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:48.835 04:54:17 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:49.094 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.353 
Hugepages 00:04:49.353 node hugesize free / total 00:04:49.353 node0 1048576kB 0 / 0 00:04:49.353 node0 2048kB 0 / 0 00:04:49.353 00:04:49.353 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:49.612 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:49.612 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:49.612 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:49.612 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:49.612 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:49.612 04:54:18 -- spdk/autotest.sh@117 -- # uname -s 00:04:49.871 04:54:18 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:49.871 04:54:18 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:49.871 04:54:18 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:50.130 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:50.698 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.698 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.698 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.698 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.698 04:54:19 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:51.632 04:54:20 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:51.632 04:54:20 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:51.632 04:54:20 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:51.632 04:54:20 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:51.632 04:54:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:51.632 04:54:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:51.632 04:54:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:51.632 04:54:20 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:51.632 04:54:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:51.890 04:54:20 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:51.890 04:54:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:51.890 04:54:20 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:52.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.148 Waiting for block devices as requested 00:04:52.148 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:52.148 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:52.405 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:52.405 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:57.694 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:57.694 04:54:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:57.694 04:54:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:57.694 04:54:26 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:57.694 04:54:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:57.694 04:54:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:57.694 04:54:26 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:57.694 04:54:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:57.694 04:54:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:57.694 04:54:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:57.694 04:54:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:57.694 04:54:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:57.694 04:54:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:57.694 04:54:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:57.694 04:54:26 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:57.694 04:54:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:57.694 04:54:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:57.694 04:54:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:57.694 04:54:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:57.694 04:54:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:57.694 04:54:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:57.694 04:54:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:57.694 04:54:26 -- common/autotest_common.sh@1543 -- # continue 00:04:57.694 04:54:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:57.694 04:54:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:57.694 04:54:26 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:57.694 04:54:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:57.694 04:54:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:57.694 04:54:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:57.694 04:54:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:57.694 04:54:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:57.694 04:54:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:57.694 04:54:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:57.694 04:54:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:57.694 04:54:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:57.694 04:54:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:57.694 04:54:26 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:57.694 04:54:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:57.694 04:54:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:57.694 04:54:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:57.694 04:54:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:57.694 04:54:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:57.694 04:54:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:57.694 04:54:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:57.694 04:54:26 -- common/autotest_common.sh@1543 -- # continue 00:04:57.694 04:54:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:57.694 04:54:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:57.694 04:54:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:57.694 04:54:26 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:57.694 04:54:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:57.694 04:54:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:57.694 04:54:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:57.694 04:54:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:57.694 04:54:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:57.694 04:54:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:57.694 04:54:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:57.694 04:54:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:57.694 04:54:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:57.694 04:54:26 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:57.694 04:54:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:57.694 04:54:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:57.694 04:54:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:57.694 04:54:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:57.694 04:54:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:57.694 04:54:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:57.694 04:54:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:57.694 04:54:26 -- common/autotest_common.sh@1543 -- # continue 00:04:57.694 04:54:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:57.694 04:54:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:57.694 04:54:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:57.694 04:54:26 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:57.694 04:54:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:57.694 04:54:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:57.694 04:54:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:57.694 04:54:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:57.694 04:54:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:57.694 04:54:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:57.694 04:54:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:57.694 04:54:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:57.694 04:54:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:57.694 04:54:26 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:57.694 04:54:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:57.694 04:54:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:57.694 04:54:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:57.694 04:54:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:57.694 04:54:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:57.694 04:54:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:57.694 04:54:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
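(Editor's sketch.) The per-controller loop traced above, whose last iteration continues below, resolves the /dev/nvmeX node behind each PCI address through sysfs, then gates on two id-ctrl fields: OACS bit 3 (namespace management, mask 0x8, hence oacs_ns_manage=8 for the 0x12a value seen here) decides whether the drive supports the operation at all, and an "unvmcap" of 0 means no unallocated capacity remains, so the iteration is skipped. A condensed reading of that logic; field parsing assumes nvme-cli's "name : value" output, and the names mirror the trace rather than quoting autotest_common.sh verbatim:

    # Map a PCI address like 0000:00:10.0 to its /dev/nvmeX character device.
    get_nvme_ctrlr_from_bdf() {
        local bdf=$1 path
        path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return
        echo "/dev/$(basename "$path")"
    }

    for bdf in "${bdfs[@]}"; do
        ctrlr=$(get_nvme_ctrlr_from_bdf "$bdf")
        [[ -z $ctrlr ]] && continue
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
        (( oacs & 0x8 )) || continue     # bit 3: no namespace management
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && continue   # nothing unallocated, skip revert
        # ...the actual revert/cleanup work would run here...
    done
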
00:04:57.694 04:54:26 -- common/autotest_common.sh@1543 -- # continue 00:04:57.694 04:54:26 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:57.694 04:54:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.694 04:54:26 -- common/autotest_common.sh@10 -- # set +x 00:04:57.694 04:54:26 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:57.694 04:54:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.694 04:54:26 -- common/autotest_common.sh@10 -- # set +x 00:04:57.694 04:54:26 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:57.957 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:58.573 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.573 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.573 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.573 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.834 04:54:27 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:58.834 04:54:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:58.834 04:54:27 -- common/autotest_common.sh@10 -- # set +x 00:04:58.834 04:54:27 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:58.834 04:54:27 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:58.834 04:54:27 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:58.834 04:54:27 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:58.834 04:54:27 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:58.834 04:54:27 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:58.834 04:54:27 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:58.834 04:54:27 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:58.834 04:54:27 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:58.834 04:54:27 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:58.834 04:54:27 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.834 04:54:27 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:58.834 04:54:27 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:58.834 04:54:28 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:58.834 04:54:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:58.834 04:54:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:58.835 04:54:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:58.835 04:54:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:58.835 04:54:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.835 04:54:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:58.835 04:54:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:58.835 04:54:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:58.835 04:54:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.835 04:54:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:58.835 04:54:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:58.835 04:54:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:58.835 04:54:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
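(Editor's sketch.) The opal_revert_cleanup trace around this point filters the discovered controllers by PCI device id: only parts reporting 0x0a54 (an Intel datacenter NVMe id, per my reading of this step) are queued for an OPAL revert, and the emulated 0x0010 devices on this VM all fall through, leaving the list empty, which is why the count check below evaluates (( 0 > 0 )). A minimal sketch of that filter, assuming the sysfs layout shown in the trace and an _bdfs array already populated:

    # Keep only the NVMe functions whose PCI device id matches the target.
    target=0x0a54
    selected=()
    for bdf in "${_bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == "$target" ]] && selected+=("$bdf")
    done
    (( ${#selected[@]} > 0 )) || echo "no $target controllers, nothing to revert"
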
00:04:58.835 04:54:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:58.835 04:54:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:58.835 04:54:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:58.835 04:54:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.835 04:54:28 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:58.835 04:54:28 -- common/autotest_common.sh@1572 -- # return 0 00:04:58.835 04:54:28 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:58.835 04:54:28 -- common/autotest_common.sh@1580 -- # return 0 00:04:58.835 04:54:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:58.835 04:54:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:58.835 04:54:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:58.835 04:54:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:58.835 04:54:28 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:58.835 04:54:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.835 04:54:28 -- common/autotest_common.sh@10 -- # set +x 00:04:58.835 04:54:28 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:58.835 04:54:28 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:58.835 04:54:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.835 04:54:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.835 04:54:28 -- common/autotest_common.sh@10 -- # set +x 00:04:58.835 ************************************ 00:04:58.835 START TEST env 00:04:58.835 ************************************ 00:04:58.835 04:54:28 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:59.094 * Looking for test storage... 00:04:59.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:59.095 04:54:28 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.095 04:54:28 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.095 04:54:28 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.095 04:54:28 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.095 04:54:28 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.095 04:54:28 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.095 04:54:28 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.095 04:54:28 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.095 04:54:28 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.095 04:54:28 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.095 04:54:28 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.095 04:54:28 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.095 04:54:28 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.095 04:54:28 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.095 04:54:28 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.095 04:54:28 env -- scripts/common.sh@344 -- # case "$op" in 00:04:59.095 04:54:28 env -- scripts/common.sh@345 -- # : 1 00:04:59.095 04:54:28 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.095 04:54:28 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.095 04:54:28 env -- scripts/common.sh@365 -- # decimal 1 00:04:59.095 04:54:28 env -- scripts/common.sh@353 -- # local d=1 00:04:59.095 04:54:28 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.095 04:54:28 env -- scripts/common.sh@355 -- # echo 1 00:04:59.095 04:54:28 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.095 04:54:28 env -- scripts/common.sh@366 -- # decimal 2 00:04:59.095 04:54:28 env -- scripts/common.sh@353 -- # local d=2 00:04:59.095 04:54:28 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.095 04:54:28 env -- scripts/common.sh@355 -- # echo 2 00:04:59.095 04:54:28 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.095 04:54:28 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.095 04:54:28 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.095 04:54:28 env -- scripts/common.sh@368 -- # return 0 00:04:59.095 04:54:28 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.095 04:54:28 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.095 --rc genhtml_branch_coverage=1 00:04:59.095 --rc genhtml_function_coverage=1 00:04:59.095 --rc genhtml_legend=1 00:04:59.095 --rc geninfo_all_blocks=1 00:04:59.095 --rc geninfo_unexecuted_blocks=1 00:04:59.095 00:04:59.095 ' 00:04:59.095 04:54:28 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.095 --rc genhtml_branch_coverage=1 00:04:59.095 --rc genhtml_function_coverage=1 00:04:59.095 --rc genhtml_legend=1 00:04:59.095 --rc geninfo_all_blocks=1 00:04:59.095 --rc geninfo_unexecuted_blocks=1 00:04:59.095 00:04:59.095 ' 00:04:59.095 04:54:28 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.095 --rc genhtml_branch_coverage=1 00:04:59.095 --rc genhtml_function_coverage=1 00:04:59.095 --rc genhtml_legend=1 00:04:59.095 --rc geninfo_all_blocks=1 00:04:59.095 --rc geninfo_unexecuted_blocks=1 00:04:59.095 00:04:59.095 ' 00:04:59.095 04:54:28 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.095 --rc genhtml_branch_coverage=1 00:04:59.095 --rc genhtml_function_coverage=1 00:04:59.095 --rc genhtml_legend=1 00:04:59.095 --rc geninfo_all_blocks=1 00:04:59.095 --rc geninfo_unexecuted_blocks=1 00:04:59.095 00:04:59.095 ' 00:04:59.095 04:54:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:59.095 04:54:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.095 04:54:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.095 04:54:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.095 ************************************ 00:04:59.095 START TEST env_memory 00:04:59.095 ************************************ 00:04:59.095 04:54:28 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:59.095 00:04:59.095 00:04:59.095 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.095 http://cunit.sourceforge.net/ 00:04:59.095 00:04:59.095 00:04:59.095 Suite: memory 00:04:59.095 Test: alloc and free memory map ...[2024-11-28 04:54:28.253487] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:59.095 passed 00:04:59.095 Test: mem map translation ...[2024-11-28 04:54:28.283404] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:59.095 [2024-11-28 04:54:28.283440] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:59.095 [2024-11-28 04:54:28.283482] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:59.095 [2024-11-28 04:54:28.283494] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:59.095 passed 00:04:59.095 Test: mem map registration ...[2024-11-28 04:54:28.333930] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:59.095 [2024-11-28 04:54:28.333960] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:59.095 passed 00:04:59.354 Test: mem map adjacent registrations ...passed 00:04:59.354 00:04:59.354 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.354 suites 1 1 n/a 0 0 00:04:59.354 tests 4 4 4 0 0 00:04:59.354 asserts 152 152 152 0 n/a 00:04:59.354 00:04:59.354 Elapsed time = 0.175 seconds 00:04:59.354 00:04:59.354 real 0m0.206s 00:04:59.354 user 0m0.184s 00:04:59.354 sys 0m0.017s 00:04:59.354 ************************************ 00:04:59.354 END TEST env_memory 00:04:59.354 ************************************ 00:04:59.354 04:54:28 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.354 04:54:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:59.354 04:54:28 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:59.354 04:54:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.354 04:54:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.354 04:54:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.354 ************************************ 00:04:59.354 START TEST env_vtophys 00:04:59.354 ************************************ 00:04:59.354 04:54:28 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:59.354 EAL: lib.eal log level changed from notice to debug 00:04:59.354 EAL: Detected lcore 0 as core 0 on socket 0 00:04:59.354 EAL: Detected lcore 1 as core 0 on socket 0 00:04:59.354 EAL: Detected lcore 2 as core 0 on socket 0 00:04:59.354 EAL: Detected lcore 3 as core 0 on socket 0 00:04:59.354 EAL: Detected lcore 4 as core 0 on socket 0 00:04:59.354 EAL: Detected lcore 5 as core 0 on socket 0 00:04:59.354 EAL: Detected lcore 6 as core 0 on socket 0 00:04:59.354 EAL: Detected lcore 7 as core 0 on socket 0 00:04:59.354 EAL: Detected lcore 8 as core 0 on socket 0 00:04:59.354 EAL: Detected lcore 9 as core 0 on socket 0 00:04:59.354 EAL: Maximum logical cores by configuration: 128 00:04:59.354 EAL: Detected CPU lcores: 10 00:04:59.354 EAL: Detected NUMA nodes: 1 00:04:59.354 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:59.354 EAL: Detected shared linkage of DPDK 00:04:59.354 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:04:59.354 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:04:59.354 EAL: Registered [vdev] bus. 00:04:59.354 EAL: bus.vdev log level changed from disabled to notice 00:04:59.354 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:04:59.354 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:04:59.354 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:59.354 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:59.354 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:04:59.354 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:04:59.354 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:04:59.355 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:04:59.355 EAL: No shared files mode enabled, IPC will be disabled 00:04:59.355 EAL: No shared files mode enabled, IPC is disabled 00:04:59.355 EAL: Selected IOVA mode 'PA' 00:04:59.355 EAL: Probing VFIO support... 00:04:59.355 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:59.355 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:59.355 EAL: Ask a virtual area of 0x2e000 bytes 00:04:59.355 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:59.355 EAL: Setting up physically contiguous memory... 00:04:59.355 EAL: Setting maximum number of open files to 524288 00:04:59.355 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:59.355 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:59.355 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.355 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:59.355 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.355 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.355 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:59.355 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:59.355 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.355 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:59.355 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.355 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.355 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:59.355 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:59.355 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.355 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:59.355 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.355 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.355 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:59.355 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:59.355 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.355 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:59.355 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.355 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.355 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:59.355 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:04:59.355 EAL: Hugepages will be freed exactly as allocated. 00:04:59.355 EAL: No shared files mode enabled, IPC is disabled 00:04:59.355 EAL: No shared files mode enabled, IPC is disabled 00:04:59.355 EAL: TSC frequency is ~2600000 KHz 00:04:59.355 EAL: Main lcore 0 is ready (tid=7fa7f28eca40;cpuset=[0]) 00:04:59.355 EAL: Trying to obtain current memory policy. 00:04:59.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.355 EAL: Restoring previous memory policy: 0 00:04:59.355 EAL: request: mp_malloc_sync 00:04:59.355 EAL: No shared files mode enabled, IPC is disabled 00:04:59.355 EAL: Heap on socket 0 was expanded by 2MB 00:04:59.355 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:59.355 EAL: No shared files mode enabled, IPC is disabled 00:04:59.355 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:59.355 EAL: Mem event callback 'spdk:(nil)' registered 00:04:59.355 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:59.616 00:04:59.616 00:04:59.616 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.616 http://cunit.sourceforge.net/ 00:04:59.616 00:04:59.616 00:04:59.616 Suite: components_suite 00:04:59.876 Test: vtophys_malloc_test ...passed 00:04:59.876 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:59.876 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.876 EAL: Restoring previous memory policy: 4 00:04:59.876 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.876 EAL: request: mp_malloc_sync 00:04:59.876 EAL: No shared files mode enabled, IPC is disabled 00:04:59.876 EAL: Heap on socket 0 was expanded by 4MB 00:04:59.876 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.876 EAL: request: mp_malloc_sync 00:04:59.876 EAL: No shared files mode enabled, IPC is disabled 00:04:59.876 EAL: Heap on socket 0 was shrunk by 4MB 00:04:59.876 EAL: Trying to obtain current memory policy. 00:04:59.876 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.876 EAL: Restoring previous memory policy: 4 00:04:59.876 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.876 EAL: request: mp_malloc_sync 00:04:59.876 EAL: No shared files mode enabled, IPC is disabled 00:04:59.876 EAL: Heap on socket 0 was expanded by 6MB 00:04:59.876 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.876 EAL: request: mp_malloc_sync 00:04:59.876 EAL: No shared files mode enabled, IPC is disabled 00:04:59.876 EAL: Heap on socket 0 was shrunk by 6MB 00:04:59.876 EAL: Trying to obtain current memory policy. 00:04:59.876 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.876 EAL: Restoring previous memory policy: 4 00:04:59.876 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.876 EAL: request: mp_malloc_sync 00:04:59.876 EAL: No shared files mode enabled, IPC is disabled 00:04:59.876 EAL: Heap on socket 0 was expanded by 10MB 00:04:59.876 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.876 EAL: request: mp_malloc_sync 00:04:59.876 EAL: No shared files mode enabled, IPC is disabled 00:04:59.876 EAL: Heap on socket 0 was shrunk by 10MB 00:04:59.876 EAL: Trying to obtain current memory policy. 
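[Editor's note] The vtophys_spdk_malloc_test sequence running here is one pattern repeated at growing sizes: each spdk_malloc fires the registered 'spdk:(nil)' mem event callback, EAL expands the heap, and the matching free shrinks it by the same amount. A minimal sketch that re-runs the binary and checks the expand/shrink counts balance (path taken from this log; adjust for your checkout):

    cd /home/vagrant/spdk_repo/spdk
    ./test/env/vtophys/vtophys 2>&1 \
      | grep -E 'Heap on socket 0 was (expanded|shrunk)' \
      | awk '/expanded/ { up++ } /shrunk/ { down++ }
             END { printf "expanded=%d shrunk=%d\n", up, down }'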
00:04:59.876 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.876 EAL: Restoring previous memory policy: 4 00:04:59.876 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.877 EAL: request: mp_malloc_sync 00:04:59.877 EAL: No shared files mode enabled, IPC is disabled 00:04:59.877 EAL: Heap on socket 0 was expanded by 18MB 00:04:59.877 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.877 EAL: request: mp_malloc_sync 00:04:59.877 EAL: No shared files mode enabled, IPC is disabled 00:04:59.877 EAL: Heap on socket 0 was shrunk by 18MB 00:04:59.877 EAL: Trying to obtain current memory policy. 00:04:59.877 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.877 EAL: Restoring previous memory policy: 4 00:04:59.877 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.877 EAL: request: mp_malloc_sync 00:04:59.877 EAL: No shared files mode enabled, IPC is disabled 00:04:59.877 EAL: Heap on socket 0 was expanded by 34MB 00:04:59.877 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.877 EAL: request: mp_malloc_sync 00:04:59.877 EAL: No shared files mode enabled, IPC is disabled 00:04:59.877 EAL: Heap on socket 0 was shrunk by 34MB 00:04:59.877 EAL: Trying to obtain current memory policy. 00:04:59.877 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.877 EAL: Restoring previous memory policy: 4 00:04:59.877 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.877 EAL: request: mp_malloc_sync 00:04:59.877 EAL: No shared files mode enabled, IPC is disabled 00:04:59.877 EAL: Heap on socket 0 was expanded by 66MB 00:04:59.877 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.877 EAL: request: mp_malloc_sync 00:04:59.877 EAL: No shared files mode enabled, IPC is disabled 00:04:59.877 EAL: Heap on socket 0 was shrunk by 66MB 00:04:59.877 EAL: Trying to obtain current memory policy. 00:04:59.877 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.877 EAL: Restoring previous memory policy: 4 00:04:59.877 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.877 EAL: request: mp_malloc_sync 00:04:59.877 EAL: No shared files mode enabled, IPC is disabled 00:04:59.877 EAL: Heap on socket 0 was expanded by 130MB 00:04:59.877 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.877 EAL: request: mp_malloc_sync 00:04:59.877 EAL: No shared files mode enabled, IPC is disabled 00:04:59.877 EAL: Heap on socket 0 was shrunk by 130MB 00:04:59.877 EAL: Trying to obtain current memory policy. 00:04:59.877 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.877 EAL: Restoring previous memory policy: 4 00:04:59.877 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.877 EAL: request: mp_malloc_sync 00:04:59.877 EAL: No shared files mode enabled, IPC is disabled 00:04:59.877 EAL: Heap on socket 0 was expanded by 258MB 00:05:00.138 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.138 EAL: request: mp_malloc_sync 00:05:00.138 EAL: No shared files mode enabled, IPC is disabled 00:05:00.138 EAL: Heap on socket 0 was shrunk by 258MB 00:05:00.138 EAL: Trying to obtain current memory policy. 
00:05:00.138 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.138 EAL: Restoring previous memory policy: 4 00:05:00.138 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.138 EAL: request: mp_malloc_sync 00:05:00.138 EAL: No shared files mode enabled, IPC is disabled 00:05:00.138 EAL: Heap on socket 0 was expanded by 514MB 00:05:00.138 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.400 EAL: request: mp_malloc_sync 00:05:00.400 EAL: No shared files mode enabled, IPC is disabled 00:05:00.400 EAL: Heap on socket 0 was shrunk by 514MB 00:05:00.400 EAL: Trying to obtain current memory policy. 00:05:00.400 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.400 EAL: Restoring previous memory policy: 4 00:05:00.400 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.400 EAL: request: mp_malloc_sync 00:05:00.400 EAL: No shared files mode enabled, IPC is disabled 00:05:00.400 EAL: Heap on socket 0 was expanded by 1026MB 00:05:00.661 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.661 passed 00:05:00.661 00:05:00.661 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.661 suites 1 1 n/a 0 0 00:05:00.661 tests 2 2 2 0 0 00:05:00.661 asserts 5568 5568 5568 0 n/a 00:05:00.661 00:05:00.661 Elapsed time = 1.224 seconds 00:05:00.661 EAL: request: mp_malloc_sync 00:05:00.661 EAL: No shared files mode enabled, IPC is disabled 00:05:00.661 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:00.661 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.661 EAL: request: mp_malloc_sync 00:05:00.661 EAL: No shared files mode enabled, IPC is disabled 00:05:00.661 EAL: Heap on socket 0 was shrunk by 2MB 00:05:00.661 EAL: No shared files mode enabled, IPC is disabled 00:05:00.661 EAL: No shared files mode enabled, IPC is disabled 00:05:00.661 EAL: No shared files mode enabled, IPC is disabled 00:05:00.661 00:05:00.661 real 0m1.443s 00:05:00.661 user 0m0.582s 00:05:00.661 sys 0m0.726s 00:05:00.661 04:54:29 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.661 ************************************ 00:05:00.661 END TEST env_vtophys 00:05:00.661 ************************************ 00:05:00.661 04:54:29 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:00.920 04:54:29 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:00.920 04:54:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.920 04:54:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.920 04:54:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.920 ************************************ 00:05:00.920 START TEST env_pci 00:05:00.921 ************************************ 00:05:00.921 04:54:29 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:00.921 00:05:00.921 00:05:00.921 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.921 http://cunit.sourceforge.net/ 00:05:00.921 00:05:00.921 00:05:00.921 Suite: pci 00:05:00.921 Test: pci_hook ...[2024-11-28 04:54:29.973321] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68954 has claimed it 00:05:00.921 passed 00:05:00.921 00:05:00.921 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.921 suites 1 1 n/a 0 0 00:05:00.921 tests 1 1 1 0 0 00:05:00.921 asserts 25 25 25 0 n/a 00:05:00.921 00:05:00.921 Elapsed time = 0.004 seconds 00:05:00.921 EAL: Cannot find 
device (10000:00:01.0) 00:05:00.921 EAL: Failed to attach device on primary process 00:05:00.921 00:05:00.921 real 0m0.048s 00:05:00.921 user 0m0.017s 00:05:00.921 sys 0m0.030s 00:05:00.921 04:54:30 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.921 ************************************ 00:05:00.921 END TEST env_pci 00:05:00.921 ************************************ 00:05:00.921 04:54:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:00.921 04:54:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:00.921 04:54:30 env -- env/env.sh@15 -- # uname 00:05:00.921 04:54:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:00.921 04:54:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:00.921 04:54:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.921 04:54:30 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:00.921 04:54:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.921 04:54:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.921 ************************************ 00:05:00.921 START TEST env_dpdk_post_init 00:05:00.921 ************************************ 00:05:00.921 04:54:30 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.921 EAL: Detected CPU lcores: 10 00:05:00.921 EAL: Detected NUMA nodes: 1 00:05:00.921 EAL: Detected shared linkage of DPDK 00:05:00.921 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:00.921 EAL: Selected IOVA mode 'PA' 00:05:00.921 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:01.181 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:01.181 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:01.181 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:01.181 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:01.181 Starting DPDK initialization... 00:05:01.181 Starting SPDK post initialization... 00:05:01.181 SPDK NVMe probe 00:05:01.181 Attaching to 0000:00:10.0 00:05:01.181 Attaching to 0000:00:11.0 00:05:01.181 Attaching to 0000:00:12.0 00:05:01.181 Attaching to 0000:00:13.0 00:05:01.181 Attached to 0000:00:10.0 00:05:01.181 Attached to 0000:00:11.0 00:05:01.181 Attached to 0000:00:13.0 00:05:01.181 Attached to 0000:00:12.0 00:05:01.181 Cleaning up... 
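[Editor's note] env.sh assembles the argv shown above: '-c 0x1' from the default core mask plus, after the Linux uname check, '--base-virtaddr=0x200000000000' so DPDK's mappings land inside the VA window SPDK reserves. A sketch of the equivalent standalone invocation (flags copied verbatim from this log):

    cd /home/vagrant/spdk_repo/spdk
    # Single core (lcore 0); pin the base virtual address as the harness does.
    ./test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000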
00:05:01.181 00:05:01.181 real 0m0.225s 00:05:01.181 user 0m0.078s 00:05:01.181 sys 0m0.050s 00:05:01.181 ************************************ 00:05:01.181 END TEST env_dpdk_post_init 00:05:01.181 ************************************ 00:05:01.181 04:54:30 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.181 04:54:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.181 04:54:30 env -- env/env.sh@26 -- # uname 00:05:01.181 04:54:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:01.181 04:54:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.181 04:54:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.181 04:54:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.181 04:54:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.181 ************************************ 00:05:01.181 START TEST env_mem_callbacks 00:05:01.181 ************************************ 00:05:01.181 04:54:30 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.181 EAL: Detected CPU lcores: 10 00:05:01.181 EAL: Detected NUMA nodes: 1 00:05:01.181 EAL: Detected shared linkage of DPDK 00:05:01.181 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:01.181 EAL: Selected IOVA mode 'PA' 00:05:01.440 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:01.440 00:05:01.440 00:05:01.440 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.440 http://cunit.sourceforge.net/ 00:05:01.440 00:05:01.440 00:05:01.440 Suite: memory 00:05:01.440 Test: test ... 00:05:01.440 register 0x200000200000 2097152 00:05:01.440 malloc 3145728 00:05:01.440 register 0x200000400000 4194304 00:05:01.440 buf 0x200000500000 len 3145728 PASSED 00:05:01.440 malloc 64 00:05:01.440 buf 0x2000004fff40 len 64 PASSED 00:05:01.440 malloc 4194304 00:05:01.440 register 0x200000800000 6291456 00:05:01.440 buf 0x200000a00000 len 4194304 PASSED 00:05:01.440 free 0x200000500000 3145728 00:05:01.440 free 0x2000004fff40 64 00:05:01.440 unregister 0x200000400000 4194304 PASSED 00:05:01.440 free 0x200000a00000 4194304 00:05:01.440 unregister 0x200000800000 6291456 PASSED 00:05:01.440 malloc 8388608 00:05:01.440 register 0x200000400000 10485760 00:05:01.440 buf 0x200000600000 len 8388608 PASSED 00:05:01.440 free 0x200000600000 8388608 00:05:01.440 unregister 0x200000400000 10485760 PASSED 00:05:01.440 passed 00:05:01.440 00:05:01.440 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.440 suites 1 1 n/a 0 0 00:05:01.440 tests 1 1 1 0 0 00:05:01.440 asserts 15 15 15 0 n/a 00:05:01.440 00:05:01.440 Elapsed time = 0.012 seconds 00:05:01.440 00:05:01.440 real 0m0.160s 00:05:01.440 user 0m0.021s 00:05:01.440 sys 0m0.037s 00:05:01.440 04:54:30 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.440 04:54:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:01.440 ************************************ 00:05:01.440 END TEST env_mem_callbacks 00:05:01.440 ************************************ 00:05:01.440 00:05:01.440 real 0m2.515s 00:05:01.440 user 0m1.027s 00:05:01.440 sys 0m1.085s 00:05:01.440 ************************************ 00:05:01.440 END TEST env 00:05:01.440 ************************************ 00:05:01.440 04:54:30 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.440 04:54:30 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:01.440 04:54:30 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:01.440 04:54:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.440 04:54:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.440 04:54:30 -- common/autotest_common.sh@10 -- # set +x 00:05:01.440 ************************************ 00:05:01.440 START TEST rpc 00:05:01.440 ************************************ 00:05:01.440 04:54:30 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:01.440 * Looking for test storage... 00:05:01.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.440 04:54:30 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.440 04:54:30 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.440 04:54:30 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.701 04:54:30 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.701 04:54:30 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.701 04:54:30 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.701 04:54:30 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.701 04:54:30 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.701 04:54:30 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.701 04:54:30 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.701 04:54:30 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.701 04:54:30 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.701 04:54:30 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.701 04:54:30 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.701 04:54:30 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.701 04:54:30 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:01.701 04:54:30 rpc -- scripts/common.sh@345 -- # : 1 00:05:01.701 04:54:30 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.701 04:54:30 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.701 04:54:30 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:01.701 04:54:30 rpc -- scripts/common.sh@353 -- # local d=1 00:05:01.701 04:54:30 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.701 04:54:30 rpc -- scripts/common.sh@355 -- # echo 1 00:05:01.701 04:54:30 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.701 04:54:30 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:01.701 04:54:30 rpc -- scripts/common.sh@353 -- # local d=2 00:05:01.701 04:54:30 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.701 04:54:30 rpc -- scripts/common.sh@355 -- # echo 2 00:05:01.701 04:54:30 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.701 04:54:30 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.701 04:54:30 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.701 04:54:30 rpc -- scripts/common.sh@368 -- # return 0 00:05:01.701 04:54:30 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.701 04:54:30 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.701 --rc genhtml_branch_coverage=1 00:05:01.701 --rc genhtml_function_coverage=1 00:05:01.701 --rc genhtml_legend=1 00:05:01.701 --rc geninfo_all_blocks=1 00:05:01.701 --rc geninfo_unexecuted_blocks=1 00:05:01.701 00:05:01.701 ' 00:05:01.701 04:54:30 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.701 --rc genhtml_branch_coverage=1 00:05:01.701 --rc genhtml_function_coverage=1 00:05:01.701 --rc genhtml_legend=1 00:05:01.701 --rc geninfo_all_blocks=1 00:05:01.701 --rc geninfo_unexecuted_blocks=1 00:05:01.701 00:05:01.701 ' 00:05:01.701 04:54:30 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.701 --rc genhtml_branch_coverage=1 00:05:01.701 --rc genhtml_function_coverage=1 00:05:01.701 --rc genhtml_legend=1 00:05:01.701 --rc geninfo_all_blocks=1 00:05:01.701 --rc geninfo_unexecuted_blocks=1 00:05:01.701 00:05:01.701 ' 00:05:01.701 04:54:30 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.701 --rc genhtml_branch_coverage=1 00:05:01.701 --rc genhtml_function_coverage=1 00:05:01.701 --rc genhtml_legend=1 00:05:01.701 --rc geninfo_all_blocks=1 00:05:01.701 --rc geninfo_unexecuted_blocks=1 00:05:01.701 00:05:01.701 ' 00:05:01.701 04:54:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69076 00:05:01.701 04:54:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.701 04:54:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69076 00:05:01.701 04:54:30 rpc -- common/autotest_common.sh@835 -- # '[' -z 69076 ']' 00:05:01.701 04:54:30 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.701 04:54:30 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.702 04:54:30 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
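[Editor's note] The rpc suite launches the target with '-e bdev' (enabling the bdev tracepoint group checked later) and blocks in waitforlisten until the RPC socket answers. A reduced sketch of that handshake, assuming the default socket path; the polling loop only approximates waitforlisten, which also caps its retries:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    # Poll the RPC socket until the target responds, as waitforlisten does.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "spdk_tgt (pid $spdk_pid) is ready"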
00:05:01.702 04:54:30 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.702 04:54:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.702 04:54:30 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:01.702 [2024-11-28 04:54:30.833992] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:01.702 [2024-11-28 04:54:30.834114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69076 ] 00:05:01.702 [2024-11-28 04:54:30.976539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.962 [2024-11-28 04:54:31.007360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:01.962 [2024-11-28 04:54:31.007428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69076' to capture a snapshot of events at runtime. 00:05:01.963 [2024-11-28 04:54:31.007443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:01.963 [2024-11-28 04:54:31.007452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:01.963 [2024-11-28 04:54:31.007469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69076 for offline analysis/debug. 00:05:01.963 [2024-11-28 04:54:31.007881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.532 04:54:31 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.532 04:54:31 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:02.532 04:54:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.532 04:54:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.532 04:54:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:02.532 04:54:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:02.532 04:54:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.532 04:54:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.532 04:54:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.532 ************************************ 00:05:02.532 START TEST rpc_integrity 00:05:02.532 ************************************ 00:05:02.532 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:02.532 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:02.532 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.532 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.532 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.532 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:02.532 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:02.532 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:02.532 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
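[Editor's note] rpc_cmd is the harness wrapper around scripts/rpc.py. The integrity check here, reduced to plain RPCs (8 is the size in MiB and 512 the block size in bytes; the returned name is "Malloc0" in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_get_bdevs | jq length           # 0 before creation
    malloc=$($rpc bdev_malloc_create 8 512)   # prints the new bdev's name
    $rpc bdev_get_bdevs | jq length           # 1 afterwards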
00:05:02.532 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.532 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.532 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.532 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:02.532 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:02.532 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.532 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.532 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.532 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:02.532 { 00:05:02.532 "name": "Malloc0", 00:05:02.532 "aliases": [ 00:05:02.532 "cb0b7fec-02fd-4c78-9e88-f5c843115ad1" 00:05:02.532 ], 00:05:02.532 "product_name": "Malloc disk", 00:05:02.532 "block_size": 512, 00:05:02.532 "num_blocks": 16384, 00:05:02.532 "uuid": "cb0b7fec-02fd-4c78-9e88-f5c843115ad1", 00:05:02.532 "assigned_rate_limits": { 00:05:02.532 "rw_ios_per_sec": 0, 00:05:02.532 "rw_mbytes_per_sec": 0, 00:05:02.532 "r_mbytes_per_sec": 0, 00:05:02.532 "w_mbytes_per_sec": 0 00:05:02.532 }, 00:05:02.532 "claimed": false, 00:05:02.532 "zoned": false, 00:05:02.532 "supported_io_types": { 00:05:02.532 "read": true, 00:05:02.532 "write": true, 00:05:02.532 "unmap": true, 00:05:02.532 "flush": true, 00:05:02.532 "reset": true, 00:05:02.532 "nvme_admin": false, 00:05:02.532 "nvme_io": false, 00:05:02.532 "nvme_io_md": false, 00:05:02.532 "write_zeroes": true, 00:05:02.532 "zcopy": true, 00:05:02.532 "get_zone_info": false, 00:05:02.532 "zone_management": false, 00:05:02.532 "zone_append": false, 00:05:02.532 "compare": false, 00:05:02.532 "compare_and_write": false, 00:05:02.532 "abort": true, 00:05:02.532 "seek_hole": false, 00:05:02.532 "seek_data": false, 00:05:02.532 "copy": true, 00:05:02.532 "nvme_iov_md": false 00:05:02.532 }, 00:05:02.532 "memory_domains": [ 00:05:02.532 { 00:05:02.532 "dma_device_id": "system", 00:05:02.532 "dma_device_type": 1 00:05:02.532 }, 00:05:02.532 { 00:05:02.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.532 "dma_device_type": 2 00:05:02.532 } 00:05:02.532 ], 00:05:02.532 "driver_specific": {} 00:05:02.532 } 00:05:02.532 ]' 00:05:02.532 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:02.794 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:02.794 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:02.794 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.794 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.794 [2024-11-28 04:54:31.830050] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:02.794 [2024-11-28 04:54:31.830134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:02.794 [2024-11-28 04:54:31.830168] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:02.794 [2024-11-28 04:54:31.830193] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:02.794 [2024-11-28 04:54:31.832717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:02.794 [2024-11-28 04:54:31.832773] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:02.794 
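[Editor's note] The vbdev_passthru_register NOTICE lines above are the passthru claiming its base bdev; afterwards Malloc0 reports "claimed": true with "claim_type": "exclusive_write", which is what the JSON that follows confirms. The same sequence as standalone RPCs (a sketch; like the test, it tears down the claimer before the base bdev):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0
    $rpc bdev_get_bdevs | jq -r '.[] | "\(.name) claimed=\(.claimed)"'
    $rpc bdev_passthru_delete Passthru0    # release the claim first
    $rpc bdev_malloc_delete Malloc0        # then the base bdev can go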
Passthru0 00:05:02.794 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.794 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:02.794 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.794 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.794 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.794 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:02.794 { 00:05:02.794 "name": "Malloc0", 00:05:02.794 "aliases": [ 00:05:02.794 "cb0b7fec-02fd-4c78-9e88-f5c843115ad1" 00:05:02.794 ], 00:05:02.794 "product_name": "Malloc disk", 00:05:02.794 "block_size": 512, 00:05:02.794 "num_blocks": 16384, 00:05:02.794 "uuid": "cb0b7fec-02fd-4c78-9e88-f5c843115ad1", 00:05:02.794 "assigned_rate_limits": { 00:05:02.794 "rw_ios_per_sec": 0, 00:05:02.794 "rw_mbytes_per_sec": 0, 00:05:02.794 "r_mbytes_per_sec": 0, 00:05:02.794 "w_mbytes_per_sec": 0 00:05:02.794 }, 00:05:02.794 "claimed": true, 00:05:02.794 "claim_type": "exclusive_write", 00:05:02.794 "zoned": false, 00:05:02.794 "supported_io_types": { 00:05:02.794 "read": true, 00:05:02.794 "write": true, 00:05:02.794 "unmap": true, 00:05:02.794 "flush": true, 00:05:02.794 "reset": true, 00:05:02.794 "nvme_admin": false, 00:05:02.794 "nvme_io": false, 00:05:02.794 "nvme_io_md": false, 00:05:02.794 "write_zeroes": true, 00:05:02.794 "zcopy": true, 00:05:02.794 "get_zone_info": false, 00:05:02.794 "zone_management": false, 00:05:02.794 "zone_append": false, 00:05:02.794 "compare": false, 00:05:02.794 "compare_and_write": false, 00:05:02.794 "abort": true, 00:05:02.794 "seek_hole": false, 00:05:02.794 "seek_data": false, 00:05:02.794 "copy": true, 00:05:02.794 "nvme_iov_md": false 00:05:02.794 }, 00:05:02.794 "memory_domains": [ 00:05:02.794 { 00:05:02.794 "dma_device_id": "system", 00:05:02.794 "dma_device_type": 1 00:05:02.794 }, 00:05:02.794 { 00:05:02.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.794 "dma_device_type": 2 00:05:02.794 } 00:05:02.794 ], 00:05:02.794 "driver_specific": {} 00:05:02.794 }, 00:05:02.794 { 00:05:02.794 "name": "Passthru0", 00:05:02.794 "aliases": [ 00:05:02.794 "5bee34e0-c90a-58b4-aeb3-b1e5327bc7e7" 00:05:02.794 ], 00:05:02.794 "product_name": "passthru", 00:05:02.794 "block_size": 512, 00:05:02.794 "num_blocks": 16384, 00:05:02.794 "uuid": "5bee34e0-c90a-58b4-aeb3-b1e5327bc7e7", 00:05:02.794 "assigned_rate_limits": { 00:05:02.794 "rw_ios_per_sec": 0, 00:05:02.794 "rw_mbytes_per_sec": 0, 00:05:02.794 "r_mbytes_per_sec": 0, 00:05:02.794 "w_mbytes_per_sec": 0 00:05:02.794 }, 00:05:02.794 "claimed": false, 00:05:02.794 "zoned": false, 00:05:02.794 "supported_io_types": { 00:05:02.794 "read": true, 00:05:02.794 "write": true, 00:05:02.794 "unmap": true, 00:05:02.794 "flush": true, 00:05:02.794 "reset": true, 00:05:02.794 "nvme_admin": false, 00:05:02.794 "nvme_io": false, 00:05:02.794 "nvme_io_md": false, 00:05:02.794 "write_zeroes": true, 00:05:02.794 "zcopy": true, 00:05:02.794 "get_zone_info": false, 00:05:02.794 "zone_management": false, 00:05:02.794 "zone_append": false, 00:05:02.794 "compare": false, 00:05:02.794 "compare_and_write": false, 00:05:02.794 "abort": true, 00:05:02.794 "seek_hole": false, 00:05:02.794 "seek_data": false, 00:05:02.794 "copy": true, 00:05:02.794 "nvme_iov_md": false 00:05:02.794 }, 00:05:02.794 "memory_domains": [ 00:05:02.794 { 00:05:02.794 "dma_device_id": "system", 00:05:02.794 "dma_device_type": 1 00:05:02.794 }, 
00:05:02.794 { 00:05:02.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.794 "dma_device_type": 2 00:05:02.794 } 00:05:02.794 ], 00:05:02.795 "driver_specific": { 00:05:02.795 "passthru": { 00:05:02.795 "name": "Passthru0", 00:05:02.795 "base_bdev_name": "Malloc0" 00:05:02.795 } 00:05:02.795 } 00:05:02.795 } 00:05:02.795 ]' 00:05:02.795 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:02.795 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:02.795 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:02.795 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.795 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.795 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.795 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:02.795 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.795 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.795 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.795 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:02.795 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.795 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.795 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.795 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:02.795 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:02.795 04:54:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:02.795 00:05:02.795 real 0m0.230s 00:05:02.795 user 0m0.124s 00:05:02.795 sys 0m0.039s 00:05:02.795 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.795 04:54:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.795 ************************************ 00:05:02.795 END TEST rpc_integrity 00:05:02.795 ************************************ 00:05:02.795 04:54:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:02.795 04:54:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.795 04:54:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.795 04:54:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.795 ************************************ 00:05:02.795 START TEST rpc_plugins 00:05:02.795 ************************************ 00:05:02.795 04:54:32 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:02.795 04:54:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:02.795 04:54:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.795 04:54:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.795 04:54:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.795 04:54:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:02.795 04:54:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:02.795 04:54:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.795 04:54:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.795 04:54:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.795 04:54:32 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:02.795 { 00:05:02.795 "name": "Malloc1", 00:05:02.795 "aliases": [ 00:05:02.795 "f756f96a-ecf1-4e00-ab9f-0c29f27c5621" 00:05:02.795 ], 00:05:02.795 "product_name": "Malloc disk", 00:05:02.795 "block_size": 4096, 00:05:02.795 "num_blocks": 256, 00:05:02.795 "uuid": "f756f96a-ecf1-4e00-ab9f-0c29f27c5621", 00:05:02.795 "assigned_rate_limits": { 00:05:02.795 "rw_ios_per_sec": 0, 00:05:02.795 "rw_mbytes_per_sec": 0, 00:05:02.795 "r_mbytes_per_sec": 0, 00:05:02.795 "w_mbytes_per_sec": 0 00:05:02.795 }, 00:05:02.795 "claimed": false, 00:05:02.795 "zoned": false, 00:05:02.795 "supported_io_types": { 00:05:02.795 "read": true, 00:05:02.795 "write": true, 00:05:02.795 "unmap": true, 00:05:02.795 "flush": true, 00:05:02.795 "reset": true, 00:05:02.795 "nvme_admin": false, 00:05:02.795 "nvme_io": false, 00:05:02.795 "nvme_io_md": false, 00:05:02.795 "write_zeroes": true, 00:05:02.795 "zcopy": true, 00:05:02.795 "get_zone_info": false, 00:05:02.795 "zone_management": false, 00:05:02.795 "zone_append": false, 00:05:02.795 "compare": false, 00:05:02.795 "compare_and_write": false, 00:05:02.795 "abort": true, 00:05:02.795 "seek_hole": false, 00:05:02.795 "seek_data": false, 00:05:02.795 "copy": true, 00:05:02.795 "nvme_iov_md": false 00:05:02.795 }, 00:05:02.795 "memory_domains": [ 00:05:02.795 { 00:05:02.795 "dma_device_id": "system", 00:05:02.795 "dma_device_type": 1 00:05:02.795 }, 00:05:02.795 { 00:05:02.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.795 "dma_device_type": 2 00:05:02.795 } 00:05:02.795 ], 00:05:02.795 "driver_specific": {} 00:05:02.795 } 00:05:02.795 ]' 00:05:02.795 04:54:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:02.795 04:54:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:02.795 04:54:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:02.795 04:54:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.795 04:54:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.056 04:54:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.056 04:54:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:03.056 04:54:32 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.056 04:54:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.056 04:54:32 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.056 04:54:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:03.056 04:54:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:03.056 04:54:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:03.056 00:05:03.056 real 0m0.121s 00:05:03.056 user 0m0.064s 00:05:03.056 sys 0m0.021s 00:05:03.056 04:54:32 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.056 04:54:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.056 ************************************ 00:05:03.056 END TEST rpc_plugins 00:05:03.056 ************************************ 00:05:03.056 04:54:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:03.056 04:54:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.056 04:54:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.056 04:54:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.056 ************************************ 00:05:03.056 START TEST rpc_trace_cmd_test 
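[Editor's note] rpc_trace_cmd_test below inspects the trace state created by starting spdk_tgt with '-e bdev': only group 0x8 (bdev) is enabled, its tpoint_mask is fully set, and the shared-memory trace file is named after the app and pid. jq probes mirroring those checks (keys as printed in this run's trace_get_info output):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc trace_get_info | jq -r '.tpoint_group_mask'   # "0x8": the bdev group
    $rpc trace_get_info | jq -r '.bdev.tpoint_mask'    # "0xffffffffffffffff"
    $rpc trace_get_info | jq -r '.tpoint_shm_path'     # /dev/shm/spdk_tgt_trace.pid<pid>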
00:05:03.056 ************************************ 00:05:03.056 04:54:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:03.056 04:54:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:03.056 04:54:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:03.056 04:54:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.056 04:54:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.056 04:54:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.056 04:54:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:03.056 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69076", 00:05:03.056 "tpoint_group_mask": "0x8", 00:05:03.056 "iscsi_conn": { 00:05:03.056 "mask": "0x2", 00:05:03.056 "tpoint_mask": "0x0" 00:05:03.056 }, 00:05:03.056 "scsi": { 00:05:03.056 "mask": "0x4", 00:05:03.056 "tpoint_mask": "0x0" 00:05:03.056 }, 00:05:03.056 "bdev": { 00:05:03.056 "mask": "0x8", 00:05:03.056 "tpoint_mask": "0xffffffffffffffff" 00:05:03.056 }, 00:05:03.056 "nvmf_rdma": { 00:05:03.056 "mask": "0x10", 00:05:03.056 "tpoint_mask": "0x0" 00:05:03.056 }, 00:05:03.056 "nvmf_tcp": { 00:05:03.056 "mask": "0x20", 00:05:03.056 "tpoint_mask": "0x0" 00:05:03.056 }, 00:05:03.056 "ftl": { 00:05:03.056 "mask": "0x40", 00:05:03.056 "tpoint_mask": "0x0" 00:05:03.056 }, 00:05:03.056 "blobfs": { 00:05:03.056 "mask": "0x80", 00:05:03.056 "tpoint_mask": "0x0" 00:05:03.056 }, 00:05:03.056 "dsa": { 00:05:03.056 "mask": "0x200", 00:05:03.056 "tpoint_mask": "0x0" 00:05:03.056 }, 00:05:03.056 "thread": { 00:05:03.056 "mask": "0x400", 00:05:03.056 "tpoint_mask": "0x0" 00:05:03.056 }, 00:05:03.056 "nvme_pcie": { 00:05:03.056 "mask": "0x800", 00:05:03.056 "tpoint_mask": "0x0" 00:05:03.056 }, 00:05:03.056 "iaa": { 00:05:03.056 "mask": "0x1000", 00:05:03.056 "tpoint_mask": "0x0" 00:05:03.056 }, 00:05:03.056 "nvme_tcp": { 00:05:03.056 "mask": "0x2000", 00:05:03.056 "tpoint_mask": "0x0" 00:05:03.056 }, 00:05:03.056 "bdev_nvme": { 00:05:03.056 "mask": "0x4000", 00:05:03.056 "tpoint_mask": "0x0" 00:05:03.056 }, 00:05:03.056 "sock": { 00:05:03.056 "mask": "0x8000", 00:05:03.056 "tpoint_mask": "0x0" 00:05:03.056 }, 00:05:03.056 "blob": { 00:05:03.056 "mask": "0x10000", 00:05:03.056 "tpoint_mask": "0x0" 00:05:03.056 }, 00:05:03.056 "bdev_raid": { 00:05:03.056 "mask": "0x20000", 00:05:03.056 "tpoint_mask": "0x0" 00:05:03.056 }, 00:05:03.056 "scheduler": { 00:05:03.056 "mask": "0x40000", 00:05:03.056 "tpoint_mask": "0x0" 00:05:03.056 } 00:05:03.056 }' 00:05:03.056 04:54:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:03.056 04:54:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:03.056 04:54:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:03.056 04:54:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:03.056 04:54:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:03.056 04:54:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:03.056 04:54:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:03.056 04:54:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:03.056 04:54:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:03.318 04:54:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:03.318 00:05:03.318 real 0m0.161s 00:05:03.318 
user 0m0.124s 00:05:03.318 sys 0m0.027s 00:05:03.318 04:54:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.318 ************************************ 00:05:03.318 END TEST rpc_trace_cmd_test 00:05:03.318 ************************************ 00:05:03.318 04:54:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.318 04:54:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:03.318 04:54:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:03.318 04:54:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:03.318 04:54:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.318 04:54:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.318 04:54:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.318 ************************************ 00:05:03.318 START TEST rpc_daemon_integrity 00:05:03.318 ************************************ 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.318 { 00:05:03.318 "name": "Malloc2", 00:05:03.318 "aliases": [ 00:05:03.318 "0de298e8-0373-4a9f-bbb7-5c85ccba12bb" 00:05:03.318 ], 00:05:03.318 "product_name": "Malloc disk", 00:05:03.318 "block_size": 512, 00:05:03.318 "num_blocks": 16384, 00:05:03.318 "uuid": "0de298e8-0373-4a9f-bbb7-5c85ccba12bb", 00:05:03.318 "assigned_rate_limits": { 00:05:03.318 "rw_ios_per_sec": 0, 00:05:03.318 "rw_mbytes_per_sec": 0, 00:05:03.318 "r_mbytes_per_sec": 0, 00:05:03.318 "w_mbytes_per_sec": 0 00:05:03.318 }, 00:05:03.318 "claimed": false, 00:05:03.318 "zoned": false, 00:05:03.318 "supported_io_types": { 00:05:03.318 "read": true, 00:05:03.318 "write": true, 00:05:03.318 "unmap": true, 00:05:03.318 "flush": true, 00:05:03.318 "reset": true, 00:05:03.318 "nvme_admin": false, 00:05:03.318 "nvme_io": false, 00:05:03.318 "nvme_io_md": false, 00:05:03.318 "write_zeroes": true, 00:05:03.318 "zcopy": true, 00:05:03.318 "get_zone_info": 
false, 00:05:03.318 "zone_management": false, 00:05:03.318 "zone_append": false, 00:05:03.318 "compare": false, 00:05:03.318 "compare_and_write": false, 00:05:03.318 "abort": true, 00:05:03.318 "seek_hole": false, 00:05:03.318 "seek_data": false, 00:05:03.318 "copy": true, 00:05:03.318 "nvme_iov_md": false 00:05:03.318 }, 00:05:03.318 "memory_domains": [ 00:05:03.318 { 00:05:03.318 "dma_device_id": "system", 00:05:03.318 "dma_device_type": 1 00:05:03.318 }, 00:05:03.318 { 00:05:03.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.318 "dma_device_type": 2 00:05:03.318 } 00:05:03.318 ], 00:05:03.318 "driver_specific": {} 00:05:03.318 } 00:05:03.318 ]' 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.318 [2024-11-28 04:54:32.527230] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:03.318 [2024-11-28 04:54:32.527307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.318 [2024-11-28 04:54:32.527331] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:03.318 [2024-11-28 04:54:32.527341] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.318 [2024-11-28 04:54:32.529877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.318 [2024-11-28 04:54:32.529930] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.318 Passthru0 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.318 { 00:05:03.318 "name": "Malloc2", 00:05:03.318 "aliases": [ 00:05:03.318 "0de298e8-0373-4a9f-bbb7-5c85ccba12bb" 00:05:03.318 ], 00:05:03.318 "product_name": "Malloc disk", 00:05:03.318 "block_size": 512, 00:05:03.318 "num_blocks": 16384, 00:05:03.318 "uuid": "0de298e8-0373-4a9f-bbb7-5c85ccba12bb", 00:05:03.318 "assigned_rate_limits": { 00:05:03.318 "rw_ios_per_sec": 0, 00:05:03.318 "rw_mbytes_per_sec": 0, 00:05:03.318 "r_mbytes_per_sec": 0, 00:05:03.318 "w_mbytes_per_sec": 0 00:05:03.318 }, 00:05:03.318 "claimed": true, 00:05:03.318 "claim_type": "exclusive_write", 00:05:03.318 "zoned": false, 00:05:03.318 "supported_io_types": { 00:05:03.318 "read": true, 00:05:03.318 "write": true, 00:05:03.318 "unmap": true, 00:05:03.318 "flush": true, 00:05:03.318 "reset": true, 00:05:03.318 "nvme_admin": false, 00:05:03.318 "nvme_io": false, 00:05:03.318 "nvme_io_md": false, 00:05:03.318 "write_zeroes": true, 00:05:03.318 "zcopy": true, 00:05:03.318 "get_zone_info": false, 00:05:03.318 "zone_management": false, 00:05:03.318 "zone_append": false, 00:05:03.318 "compare": false, 
00:05:03.318 "compare_and_write": false, 00:05:03.318 "abort": true, 00:05:03.318 "seek_hole": false, 00:05:03.318 "seek_data": false, 00:05:03.318 "copy": true, 00:05:03.318 "nvme_iov_md": false 00:05:03.318 }, 00:05:03.318 "memory_domains": [ 00:05:03.318 { 00:05:03.318 "dma_device_id": "system", 00:05:03.318 "dma_device_type": 1 00:05:03.318 }, 00:05:03.318 { 00:05:03.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.318 "dma_device_type": 2 00:05:03.318 } 00:05:03.318 ], 00:05:03.318 "driver_specific": {} 00:05:03.318 }, 00:05:03.318 { 00:05:03.318 "name": "Passthru0", 00:05:03.318 "aliases": [ 00:05:03.318 "b6d29b51-83c2-5c7a-a9f4-0841781dcf15" 00:05:03.318 ], 00:05:03.318 "product_name": "passthru", 00:05:03.318 "block_size": 512, 00:05:03.318 "num_blocks": 16384, 00:05:03.318 "uuid": "b6d29b51-83c2-5c7a-a9f4-0841781dcf15", 00:05:03.318 "assigned_rate_limits": { 00:05:03.318 "rw_ios_per_sec": 0, 00:05:03.318 "rw_mbytes_per_sec": 0, 00:05:03.318 "r_mbytes_per_sec": 0, 00:05:03.318 "w_mbytes_per_sec": 0 00:05:03.318 }, 00:05:03.318 "claimed": false, 00:05:03.318 "zoned": false, 00:05:03.318 "supported_io_types": { 00:05:03.318 "read": true, 00:05:03.318 "write": true, 00:05:03.318 "unmap": true, 00:05:03.318 "flush": true, 00:05:03.318 "reset": true, 00:05:03.318 "nvme_admin": false, 00:05:03.318 "nvme_io": false, 00:05:03.318 "nvme_io_md": false, 00:05:03.318 "write_zeroes": true, 00:05:03.318 "zcopy": true, 00:05:03.318 "get_zone_info": false, 00:05:03.318 "zone_management": false, 00:05:03.318 "zone_append": false, 00:05:03.318 "compare": false, 00:05:03.318 "compare_and_write": false, 00:05:03.318 "abort": true, 00:05:03.318 "seek_hole": false, 00:05:03.318 "seek_data": false, 00:05:03.318 "copy": true, 00:05:03.318 "nvme_iov_md": false 00:05:03.318 }, 00:05:03.318 "memory_domains": [ 00:05:03.318 { 00:05:03.318 "dma_device_id": "system", 00:05:03.318 "dma_device_type": 1 00:05:03.318 }, 00:05:03.318 { 00:05:03.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.318 "dma_device_type": 2 00:05:03.318 } 00:05:03.318 ], 00:05:03.318 "driver_specific": { 00:05:03.318 "passthru": { 00:05:03.318 "name": "Passthru0", 00:05:03.318 "base_bdev_name": "Malloc2" 00:05:03.318 } 00:05:03.318 } 00:05:03.318 } 00:05:03.318 ]' 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:03.318 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:03.319 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:03.319 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.319 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.319 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.319 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:03.319 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.319 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.319 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.580 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:03.580 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.580 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.580 04:54:32 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.580 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.580 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:03.580 04:54:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.581 00:05:03.581 real 0m0.232s 00:05:03.581 user 0m0.135s 00:05:03.581 sys 0m0.030s 00:05:03.581 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.581 04:54:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.581 ************************************ 00:05:03.581 END TEST rpc_daemon_integrity 00:05:03.581 ************************************ 00:05:03.581 04:54:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:03.581 04:54:32 rpc -- rpc/rpc.sh@84 -- # killprocess 69076 00:05:03.581 04:54:32 rpc -- common/autotest_common.sh@954 -- # '[' -z 69076 ']' 00:05:03.581 04:54:32 rpc -- common/autotest_common.sh@958 -- # kill -0 69076 00:05:03.581 04:54:32 rpc -- common/autotest_common.sh@959 -- # uname 00:05:03.581 04:54:32 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.581 04:54:32 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69076 00:05:03.581 04:54:32 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.581 killing process with pid 69076 00:05:03.581 04:54:32 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.581 04:54:32 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69076' 00:05:03.581 04:54:32 rpc -- common/autotest_common.sh@973 -- # kill 69076 00:05:03.581 04:54:32 rpc -- common/autotest_common.sh@978 -- # wait 69076 00:05:03.843 00:05:03.843 real 0m2.406s 00:05:03.843 user 0m2.780s 00:05:03.843 sys 0m0.677s 00:05:03.843 ************************************ 00:05:03.843 04:54:33 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.843 04:54:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.843 END TEST rpc 00:05:03.843 ************************************ 00:05:03.843 04:54:33 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:03.843 04:54:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.843 04:54:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.843 04:54:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.843 ************************************ 00:05:03.843 START TEST skip_rpc 00:05:03.843 ************************************ 00:05:03.843 04:54:33 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:04.104 * Looking for test storage... 
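The rpc_daemon_integrity steps captured above map one-to-one onto SPDK's JSON-RPC client, scripts/rpc.py, which is what the rpc_cmd helper forwards to. A minimal hand-run sketch of the same bdev lifecycle, assuming the repo layout of this run and a target already listening on /var/tmp/spdk.sock:

  cd /home/vagrant/spdk_repo/spdk
  scripts/rpc.py bdev_malloc_create 8 512                      # prints the new bdev name, e.g. Malloc2
  scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0  # passthru claims the base bdev exclusively
  scripts/rpc.py bdev_get_bdevs | jq length                    # 2, the check at rpc/rpc.sh@21
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc2
  scripts/rpc.py bdev_get_bdevs | jq length                    # back to 0, as at rpc/rpc.sh@26
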
00:05:04.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:04.104 04:54:33 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.104 04:54:33 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.104 04:54:33 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.104 04:54:33 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.104 04:54:33 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:04.104 04:54:33 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.104 04:54:33 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.104 --rc genhtml_branch_coverage=1 00:05:04.104 --rc genhtml_function_coverage=1 00:05:04.104 --rc genhtml_legend=1 00:05:04.104 --rc geninfo_all_blocks=1 00:05:04.104 --rc geninfo_unexecuted_blocks=1 00:05:04.104 00:05:04.104 ' 00:05:04.104 04:54:33 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.104 --rc genhtml_branch_coverage=1 00:05:04.104 --rc genhtml_function_coverage=1 00:05:04.104 --rc genhtml_legend=1 00:05:04.104 --rc geninfo_all_blocks=1 00:05:04.104 --rc geninfo_unexecuted_blocks=1 00:05:04.104 00:05:04.104 ' 00:05:04.104 04:54:33 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:04.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.104 --rc genhtml_branch_coverage=1 00:05:04.104 --rc genhtml_function_coverage=1 00:05:04.104 --rc genhtml_legend=1 00:05:04.105 --rc geninfo_all_blocks=1 00:05:04.105 --rc geninfo_unexecuted_blocks=1 00:05:04.105 00:05:04.105 ' 00:05:04.105 04:54:33 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.105 --rc genhtml_branch_coverage=1 00:05:04.105 --rc genhtml_function_coverage=1 00:05:04.105 --rc genhtml_legend=1 00:05:04.105 --rc geninfo_all_blocks=1 00:05:04.105 --rc geninfo_unexecuted_blocks=1 00:05:04.105 00:05:04.105 ' 00:05:04.105 04:54:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:04.105 04:54:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:04.105 04:54:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:04.105 04:54:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.105 04:54:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.105 04:54:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.105 ************************************ 00:05:04.105 START TEST skip_rpc 00:05:04.105 ************************************ 00:05:04.105 04:54:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:04.105 04:54:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69277 00:05:04.105 04:54:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:04.105 04:54:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.105 04:54:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:04.105 [2024-11-28 04:54:33.310052] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
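Once this target finishes starting, the checks at rpc/skip_rpc.sh@21-23 expect every RPC to fail, because --no-rpc-server means nothing ever listens on /var/tmp/spdk.sock. A hand-run sketch of the same negative test, assuming the same build tree:

  cd /home/vagrant/spdk_repo/spdk
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5                                     # mirror the settle delay at rpc/skip_rpc.sh@19
  if scripts/rpc.py spdk_get_version; then
      echo 'FAIL: RPC answered with no server listening'   # the suite treats success here as failure
  fi
  kill %1
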
00:05:04.105 [2024-11-28 04:54:33.310226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69277 ] 00:05:04.366 [2024-11-28 04:54:33.459864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.366 [2024-11-28 04:54:33.488499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69277 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69277 ']' 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69277 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69277 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.642 killing process with pid 69277 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69277' 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69277 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69277 00:05:09.642 00:05:09.642 real 0m5.256s 00:05:09.642 user 0m4.876s 00:05:09.642 sys 0m0.276s 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.642 04:54:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.642 ************************************ 00:05:09.642 END TEST skip_rpc 00:05:09.642 
************************************ 00:05:09.642 04:54:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:09.642 04:54:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.642 04:54:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.642 04:54:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.642 ************************************ 00:05:09.642 START TEST skip_rpc_with_json 00:05:09.642 ************************************ 00:05:09.642 04:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:09.642 04:54:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:09.642 04:54:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69359 00:05:09.642 04:54:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.642 04:54:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69359 00:05:09.642 04:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69359 ']' 00:05:09.642 04:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.642 04:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.642 04:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.642 04:54:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.642 04:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.642 04:54:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.642 [2024-11-28 04:54:38.615149] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
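The configuration dump a little further on is the heart of skip_rpc_with_json: the suite creates a TCP transport over RPC, captures the full target configuration with save_config into test/rpc/config.json, reboots a fresh target from that file, and greps its log for 'TCP Transport Init'. Condensed to direct commands, assuming a target already listening on the default socket (the redirect into log.txt is assumed; the suite's own capture mechanism is not shown in this output):

  cd /home/vagrant/spdk_repo/spdk
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > test/rpc/config.json
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
  sleep 5 && kill %1                                # let it boot, then stop it (skip_rpc.sh@48-50)
  grep -q 'TCP Transport Init' test/rpc/log.txt     # the pass condition at skip_rpc.sh@51
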
00:05:09.642 [2024-11-28 04:54:38.615292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69359 ] 00:05:09.642 [2024-11-28 04:54:38.759410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.642 [2024-11-28 04:54:38.781938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.208 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.209 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:10.209 04:54:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:10.209 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.209 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.209 [2024-11-28 04:54:39.449892] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:10.209 request: 00:05:10.209 { 00:05:10.209 "trtype": "tcp", 00:05:10.209 "method": "nvmf_get_transports", 00:05:10.209 "req_id": 1 00:05:10.209 } 00:05:10.209 Got JSON-RPC error response 00:05:10.209 response: 00:05:10.209 { 00:05:10.209 "code": -19, 00:05:10.209 "message": "No such device" 00:05:10.209 } 00:05:10.209 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:10.209 04:54:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:10.209 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.209 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.209 [2024-11-28 04:54:39.461991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.209 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.209 04:54:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:10.209 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.209 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.468 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.468 04:54:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:10.468 { 00:05:10.468 "subsystems": [ 00:05:10.468 { 00:05:10.468 "subsystem": "fsdev", 00:05:10.468 "config": [ 00:05:10.468 { 00:05:10.468 "method": "fsdev_set_opts", 00:05:10.468 "params": { 00:05:10.468 "fsdev_io_pool_size": 65535, 00:05:10.468 "fsdev_io_cache_size": 256 00:05:10.468 } 00:05:10.468 } 00:05:10.468 ] 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "subsystem": "keyring", 00:05:10.468 "config": [] 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "subsystem": "iobuf", 00:05:10.468 "config": [ 00:05:10.468 { 00:05:10.468 "method": "iobuf_set_options", 00:05:10.468 "params": { 00:05:10.468 "small_pool_count": 8192, 00:05:10.468 "large_pool_count": 1024, 00:05:10.468 "small_bufsize": 8192, 00:05:10.468 "large_bufsize": 135168, 00:05:10.468 "enable_numa": false 00:05:10.468 } 00:05:10.468 } 00:05:10.468 ] 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "subsystem": "sock", 00:05:10.468 "config": [ 00:05:10.468 { 
00:05:10.468 "method": "sock_set_default_impl", 00:05:10.468 "params": { 00:05:10.468 "impl_name": "posix" 00:05:10.468 } 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "method": "sock_impl_set_options", 00:05:10.468 "params": { 00:05:10.468 "impl_name": "ssl", 00:05:10.468 "recv_buf_size": 4096, 00:05:10.468 "send_buf_size": 4096, 00:05:10.468 "enable_recv_pipe": true, 00:05:10.468 "enable_quickack": false, 00:05:10.468 "enable_placement_id": 0, 00:05:10.468 "enable_zerocopy_send_server": true, 00:05:10.468 "enable_zerocopy_send_client": false, 00:05:10.468 "zerocopy_threshold": 0, 00:05:10.468 "tls_version": 0, 00:05:10.468 "enable_ktls": false 00:05:10.468 } 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "method": "sock_impl_set_options", 00:05:10.468 "params": { 00:05:10.468 "impl_name": "posix", 00:05:10.468 "recv_buf_size": 2097152, 00:05:10.468 "send_buf_size": 2097152, 00:05:10.468 "enable_recv_pipe": true, 00:05:10.468 "enable_quickack": false, 00:05:10.468 "enable_placement_id": 0, 00:05:10.468 "enable_zerocopy_send_server": true, 00:05:10.468 "enable_zerocopy_send_client": false, 00:05:10.468 "zerocopy_threshold": 0, 00:05:10.468 "tls_version": 0, 00:05:10.468 "enable_ktls": false 00:05:10.468 } 00:05:10.468 } 00:05:10.468 ] 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "subsystem": "vmd", 00:05:10.468 "config": [] 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "subsystem": "accel", 00:05:10.468 "config": [ 00:05:10.468 { 00:05:10.468 "method": "accel_set_options", 00:05:10.468 "params": { 00:05:10.468 "small_cache_size": 128, 00:05:10.468 "large_cache_size": 16, 00:05:10.468 "task_count": 2048, 00:05:10.468 "sequence_count": 2048, 00:05:10.468 "buf_count": 2048 00:05:10.468 } 00:05:10.468 } 00:05:10.468 ] 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "subsystem": "bdev", 00:05:10.468 "config": [ 00:05:10.468 { 00:05:10.468 "method": "bdev_set_options", 00:05:10.468 "params": { 00:05:10.468 "bdev_io_pool_size": 65535, 00:05:10.468 "bdev_io_cache_size": 256, 00:05:10.468 "bdev_auto_examine": true, 00:05:10.468 "iobuf_small_cache_size": 128, 00:05:10.468 "iobuf_large_cache_size": 16 00:05:10.468 } 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "method": "bdev_raid_set_options", 00:05:10.468 "params": { 00:05:10.468 "process_window_size_kb": 1024, 00:05:10.468 "process_max_bandwidth_mb_sec": 0 00:05:10.468 } 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "method": "bdev_iscsi_set_options", 00:05:10.468 "params": { 00:05:10.468 "timeout_sec": 30 00:05:10.468 } 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "method": "bdev_nvme_set_options", 00:05:10.468 "params": { 00:05:10.468 "action_on_timeout": "none", 00:05:10.468 "timeout_us": 0, 00:05:10.468 "timeout_admin_us": 0, 00:05:10.468 "keep_alive_timeout_ms": 10000, 00:05:10.468 "arbitration_burst": 0, 00:05:10.468 "low_priority_weight": 0, 00:05:10.468 "medium_priority_weight": 0, 00:05:10.468 "high_priority_weight": 0, 00:05:10.468 "nvme_adminq_poll_period_us": 10000, 00:05:10.468 "nvme_ioq_poll_period_us": 0, 00:05:10.468 "io_queue_requests": 0, 00:05:10.468 "delay_cmd_submit": true, 00:05:10.468 "transport_retry_count": 4, 00:05:10.468 "bdev_retry_count": 3, 00:05:10.468 "transport_ack_timeout": 0, 00:05:10.468 "ctrlr_loss_timeout_sec": 0, 00:05:10.468 "reconnect_delay_sec": 0, 00:05:10.468 "fast_io_fail_timeout_sec": 0, 00:05:10.468 "disable_auto_failback": false, 00:05:10.468 "generate_uuids": false, 00:05:10.468 "transport_tos": 0, 00:05:10.468 "nvme_error_stat": false, 00:05:10.468 "rdma_srq_size": 0, 00:05:10.468 "io_path_stat": false, 
00:05:10.468 "allow_accel_sequence": false, 00:05:10.468 "rdma_max_cq_size": 0, 00:05:10.468 "rdma_cm_event_timeout_ms": 0, 00:05:10.468 "dhchap_digests": [ 00:05:10.468 "sha256", 00:05:10.468 "sha384", 00:05:10.468 "sha512" 00:05:10.468 ], 00:05:10.468 "dhchap_dhgroups": [ 00:05:10.468 "null", 00:05:10.468 "ffdhe2048", 00:05:10.468 "ffdhe3072", 00:05:10.468 "ffdhe4096", 00:05:10.468 "ffdhe6144", 00:05:10.468 "ffdhe8192" 00:05:10.468 ] 00:05:10.468 } 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "method": "bdev_nvme_set_hotplug", 00:05:10.468 "params": { 00:05:10.468 "period_us": 100000, 00:05:10.468 "enable": false 00:05:10.468 } 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "method": "bdev_wait_for_examine" 00:05:10.468 } 00:05:10.468 ] 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "subsystem": "scsi", 00:05:10.468 "config": null 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "subsystem": "scheduler", 00:05:10.468 "config": [ 00:05:10.468 { 00:05:10.468 "method": "framework_set_scheduler", 00:05:10.468 "params": { 00:05:10.468 "name": "static" 00:05:10.468 } 00:05:10.468 } 00:05:10.468 ] 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "subsystem": "vhost_scsi", 00:05:10.468 "config": [] 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "subsystem": "vhost_blk", 00:05:10.468 "config": [] 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "subsystem": "ublk", 00:05:10.468 "config": [] 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "subsystem": "nbd", 00:05:10.468 "config": [] 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "subsystem": "nvmf", 00:05:10.468 "config": [ 00:05:10.468 { 00:05:10.468 "method": "nvmf_set_config", 00:05:10.468 "params": { 00:05:10.468 "discovery_filter": "match_any", 00:05:10.468 "admin_cmd_passthru": { 00:05:10.468 "identify_ctrlr": false 00:05:10.468 }, 00:05:10.468 "dhchap_digests": [ 00:05:10.468 "sha256", 00:05:10.468 "sha384", 00:05:10.468 "sha512" 00:05:10.468 ], 00:05:10.468 "dhchap_dhgroups": [ 00:05:10.468 "null", 00:05:10.468 "ffdhe2048", 00:05:10.468 "ffdhe3072", 00:05:10.468 "ffdhe4096", 00:05:10.468 "ffdhe6144", 00:05:10.468 "ffdhe8192" 00:05:10.468 ] 00:05:10.468 } 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "method": "nvmf_set_max_subsystems", 00:05:10.468 "params": { 00:05:10.468 "max_subsystems": 1024 00:05:10.468 } 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "method": "nvmf_set_crdt", 00:05:10.468 "params": { 00:05:10.468 "crdt1": 0, 00:05:10.468 "crdt2": 0, 00:05:10.468 "crdt3": 0 00:05:10.468 } 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "method": "nvmf_create_transport", 00:05:10.468 "params": { 00:05:10.468 "trtype": "TCP", 00:05:10.468 "max_queue_depth": 128, 00:05:10.468 "max_io_qpairs_per_ctrlr": 127, 00:05:10.468 "in_capsule_data_size": 4096, 00:05:10.468 "max_io_size": 131072, 00:05:10.468 "io_unit_size": 131072, 00:05:10.468 "max_aq_depth": 128, 00:05:10.468 "num_shared_buffers": 511, 00:05:10.468 "buf_cache_size": 4294967295, 00:05:10.468 "dif_insert_or_strip": false, 00:05:10.468 "zcopy": false, 00:05:10.468 "c2h_success": true, 00:05:10.468 "sock_priority": 0, 00:05:10.468 "abort_timeout_sec": 1, 00:05:10.468 "ack_timeout": 0, 00:05:10.468 "data_wr_pool_size": 0 00:05:10.468 } 00:05:10.468 } 00:05:10.468 ] 00:05:10.468 }, 00:05:10.468 { 00:05:10.468 "subsystem": "iscsi", 00:05:10.468 "config": [ 00:05:10.468 { 00:05:10.468 "method": "iscsi_set_options", 00:05:10.468 "params": { 00:05:10.468 "node_base": "iqn.2016-06.io.spdk", 00:05:10.468 "max_sessions": 128, 00:05:10.468 "max_connections_per_session": 2, 00:05:10.468 "max_queue_depth": 64, 00:05:10.468 
"default_time2wait": 2, 00:05:10.468 "default_time2retain": 20, 00:05:10.468 "first_burst_length": 8192, 00:05:10.468 "immediate_data": true, 00:05:10.468 "allow_duplicated_isid": false, 00:05:10.468 "error_recovery_level": 0, 00:05:10.468 "nop_timeout": 60, 00:05:10.468 "nop_in_interval": 30, 00:05:10.468 "disable_chap": false, 00:05:10.468 "require_chap": false, 00:05:10.468 "mutual_chap": false, 00:05:10.468 "chap_group": 0, 00:05:10.468 "max_large_datain_per_connection": 64, 00:05:10.468 "max_r2t_per_connection": 4, 00:05:10.468 "pdu_pool_size": 36864, 00:05:10.468 "immediate_data_pool_size": 16384, 00:05:10.468 "data_out_pool_size": 2048 00:05:10.468 } 00:05:10.468 } 00:05:10.468 ] 00:05:10.468 } 00:05:10.468 ] 00:05:10.468 } 00:05:10.468 04:54:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:10.468 04:54:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69359 00:05:10.468 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69359 ']' 00:05:10.468 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69359 00:05:10.468 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:10.468 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.468 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69359 00:05:10.468 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.468 killing process with pid 69359 00:05:10.468 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.468 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69359' 00:05:10.468 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69359 00:05:10.468 04:54:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69359 00:05:10.728 04:54:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69382 00:05:10.728 04:54:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:10.728 04:54:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:16.078 04:54:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69382 00:05:16.078 04:54:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69382 ']' 00:05:16.078 04:54:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69382 00:05:16.078 04:54:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:16.078 04:54:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.078 04:54:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69382 00:05:16.078 04:54:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.078 04:54:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.078 killing process with pid 69382 00:05:16.078 04:54:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69382' 00:05:16.078 04:54:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69382 00:05:16.078 04:54:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69382 00:05:16.078 04:54:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:16.078 04:54:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:16.078 00:05:16.078 real 0m6.584s 00:05:16.078 user 0m6.279s 00:05:16.078 sys 0m0.525s 00:05:16.078 04:54:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.078 ************************************ 00:05:16.078 END TEST skip_rpc_with_json 00:05:16.078 ************************************ 00:05:16.078 04:54:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.078 04:54:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:16.078 04:54:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.078 04:54:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.078 04:54:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.078 ************************************ 00:05:16.078 START TEST skip_rpc_with_delay 00:05:16.078 ************************************ 00:05:16.078 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:16.078 04:54:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.078 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:16.078 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.078 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.079 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.079 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.079 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.079 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.079 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.079 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.079 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:16.079 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.079 [2024-11-28 04:54:45.261962] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
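That ERROR line is the pass condition for skip_rpc_with_delay: --wait-for-rpc asks the app to pause startup until an RPC such as framework_start_init arrives, which is impossible when --no-rpc-server suppresses the server, so spdk_tgt must refuse to start at all. Reproduced standalone, assuming the same binary:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo $?   # expected non-zero: the conflicting flag pair is rejected before initialization
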
00:05:16.079 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:16.079 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:16.079 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:16.079 ************************************ 00:05:16.079 END TEST skip_rpc_with_delay 00:05:16.079 ************************************ 00:05:16.079 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:16.079 00:05:16.079 real 0m0.121s 00:05:16.079 user 0m0.068s 00:05:16.079 sys 0m0.052s 00:05:16.079 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.079 04:54:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:16.353 04:54:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:16.353 04:54:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:16.353 04:54:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:16.353 04:54:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.353 04:54:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.353 04:54:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.353 ************************************ 00:05:16.353 START TEST exit_on_failed_rpc_init 00:05:16.353 ************************************ 00:05:16.353 04:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:16.353 04:54:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69495 00:05:16.353 04:54:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69495 00:05:16.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.353 04:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69495 ']' 00:05:16.353 04:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.353 04:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.353 04:54:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.353 04:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.353 04:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.353 04:54:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:16.353 [2024-11-28 04:54:45.445137] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:16.353 [2024-11-28 04:54:45.445298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69495 ] 00:05:16.353 [2024-11-28 04:54:45.592751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.354 [2024-11-28 04:54:45.621774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.300 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.300 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:17.300 04:54:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.300 04:54:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:17.300 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:17.300 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:17.300 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.300 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.300 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.300 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.300 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.300 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.300 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.300 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:17.300 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:17.300 [2024-11-28 04:54:46.382548] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:17.300 [2024-11-28 04:54:46.383069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69512 ] 00:05:17.300 [2024-11-28 04:54:46.528778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.300 [2024-11-28 04:54:46.558455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.300 [2024-11-28 04:54:46.558564] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:17.300 [2024-11-28 04:54:46.558586] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:17.300 [2024-11-28 04:54:46.558600] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69495 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69495 ']' 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69495 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69495 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.562 killing process with pid 69495 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69495' 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69495 00:05:17.562 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69495 00:05:17.824 00:05:17.824 real 0m1.598s 00:05:17.824 user 0m1.700s 00:05:17.824 sys 0m0.466s 00:05:17.824 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.824 04:54:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:17.824 ************************************ 00:05:17.824 END TEST exit_on_failed_rpc_init 00:05:17.824 ************************************ 00:05:17.824 04:54:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:17.824 ************************************ 00:05:17.824 END TEST skip_rpc 00:05:17.824 ************************************ 00:05:17.824 00:05:17.824 real 0m13.946s 00:05:17.824 user 0m13.061s 00:05:17.824 sys 0m1.506s 00:05:17.824 04:54:47 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.824 04:54:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.824 04:54:47 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:17.824 04:54:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.824 04:54:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.824 04:54:47 -- common/autotest_common.sh@10 -- # set +x 00:05:17.824 
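The rpc_client suite starting below swaps the shell helpers for a compiled C client, test/rpc_client/rpc_client_test, which speaks the same JSON-RPC 2.0 protocol to the target's Unix socket. As a rough illustration of that wire format, a request like the spdk_get_version call used earlier could be issued by hand with, e.g., a Unix-socket-capable netcat:

  printf '{"jsonrpc":"2.0","method":"spdk_get_version","id":1}' | nc -U /var/tmp/spdk.sock
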
************************************ 00:05:17.824 START TEST rpc_client 00:05:17.824 ************************************ 00:05:17.824 04:54:47 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:18.087 * Looking for test storage... 00:05:18.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:18.087 04:54:47 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.087 04:54:47 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.087 04:54:47 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.087 04:54:47 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.087 04:54:47 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:18.087 04:54:47 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.087 04:54:47 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.087 --rc genhtml_branch_coverage=1 00:05:18.087 --rc genhtml_function_coverage=1 00:05:18.087 --rc genhtml_legend=1 00:05:18.087 --rc geninfo_all_blocks=1 00:05:18.087 --rc geninfo_unexecuted_blocks=1 00:05:18.087 00:05:18.087 ' 00:05:18.087 04:54:47 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.087 --rc genhtml_branch_coverage=1 00:05:18.087 --rc genhtml_function_coverage=1 00:05:18.087 --rc genhtml_legend=1 00:05:18.087 --rc geninfo_all_blocks=1 00:05:18.087 --rc geninfo_unexecuted_blocks=1 00:05:18.087 00:05:18.087 ' 00:05:18.087 04:54:47 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.087 --rc genhtml_branch_coverage=1 00:05:18.087 --rc genhtml_function_coverage=1 00:05:18.087 --rc genhtml_legend=1 00:05:18.087 --rc geninfo_all_blocks=1 00:05:18.087 --rc geninfo_unexecuted_blocks=1 00:05:18.087 00:05:18.087 ' 00:05:18.087 04:54:47 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.087 --rc genhtml_branch_coverage=1 00:05:18.087 --rc genhtml_function_coverage=1 00:05:18.087 --rc genhtml_legend=1 00:05:18.087 --rc geninfo_all_blocks=1 00:05:18.087 --rc geninfo_unexecuted_blocks=1 00:05:18.087 00:05:18.087 ' 00:05:18.087 04:54:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:18.087 OK 00:05:18.087 04:54:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:18.087 00:05:18.087 real 0m0.188s 00:05:18.087 user 0m0.103s 00:05:18.087 sys 0m0.088s 00:05:18.087 04:54:47 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.087 04:54:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:18.087 ************************************ 00:05:18.087 END TEST rpc_client 00:05:18.087 ************************************ 00:05:18.087 04:54:47 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:18.087 04:54:47 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.087 04:54:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.087 04:54:47 -- common/autotest_common.sh@10 -- # set +x 00:05:18.087 ************************************ 00:05:18.087 START TEST json_config 00:05:18.087 ************************************ 00:05:18.087 04:54:47 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:18.349 04:54:47 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.349 04:54:47 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.349 04:54:47 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.349 04:54:47 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.349 04:54:47 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.349 04:54:47 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.349 04:54:47 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.349 04:54:47 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.349 04:54:47 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.349 04:54:47 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.349 04:54:47 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.349 04:54:47 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.350 04:54:47 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.350 04:54:47 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.350 04:54:47 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.350 04:54:47 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:18.350 04:54:47 json_config -- scripts/common.sh@345 -- # : 1 00:05:18.350 04:54:47 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.350 04:54:47 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.350 04:54:47 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:18.350 04:54:47 json_config -- scripts/common.sh@353 -- # local d=1 00:05:18.350 04:54:47 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.350 04:54:47 json_config -- scripts/common.sh@355 -- # echo 1 00:05:18.350 04:54:47 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.350 04:54:47 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:18.350 04:54:47 json_config -- scripts/common.sh@353 -- # local d=2 00:05:18.350 04:54:47 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.350 04:54:47 json_config -- scripts/common.sh@355 -- # echo 2 00:05:18.350 04:54:47 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.350 04:54:47 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.350 04:54:47 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.350 04:54:47 json_config -- scripts/common.sh@368 -- # return 0 00:05:18.350 04:54:47 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.350 04:54:47 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.350 --rc genhtml_branch_coverage=1 00:05:18.350 --rc genhtml_function_coverage=1 00:05:18.350 --rc genhtml_legend=1 00:05:18.350 --rc geninfo_all_blocks=1 00:05:18.350 --rc geninfo_unexecuted_blocks=1 00:05:18.350 00:05:18.350 ' 00:05:18.350 04:54:47 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.350 --rc genhtml_branch_coverage=1 00:05:18.350 --rc genhtml_function_coverage=1 00:05:18.350 --rc genhtml_legend=1 00:05:18.350 --rc geninfo_all_blocks=1 00:05:18.350 --rc geninfo_unexecuted_blocks=1 00:05:18.350 00:05:18.350 ' 00:05:18.350 04:54:47 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.350 --rc genhtml_branch_coverage=1 00:05:18.350 --rc genhtml_function_coverage=1 00:05:18.350 --rc genhtml_legend=1 00:05:18.350 --rc geninfo_all_blocks=1 00:05:18.350 --rc geninfo_unexecuted_blocks=1 00:05:18.350 00:05:18.350 ' 00:05:18.350 04:54:47 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.350 --rc genhtml_branch_coverage=1 00:05:18.350 --rc genhtml_function_coverage=1 00:05:18.350 --rc genhtml_legend=1 00:05:18.350 --rc geninfo_all_blocks=1 00:05:18.350 --rc geninfo_unexecuted_blocks=1 00:05:18.350 00:05:18.350 ' 00:05:18.350 04:54:47 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.350 04:54:47 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3e346057-9c48-4fc2-acf8-735d220de68f 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=3e346057-9c48-4fc2-acf8-735d220de68f 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:18.350 04:54:47 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.350 04:54:47 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.350 04:54:47 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.350 04:54:47 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.350 04:54:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.350 04:54:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.350 04:54:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.350 04:54:47 json_config -- paths/export.sh@5 -- # export PATH 00:05:18.350 04:54:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@51 -- # : 0 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.350 04:54:47 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.350 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.350 04:54:47 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.350 04:54:47 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:18.350 04:54:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:18.350 04:54:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:18.350 04:54:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:18.350 04:54:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:18.350 WARNING: No tests are enabled so not running JSON configuration tests 00:05:18.350 04:54:47 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:18.350 04:54:47 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:18.350 00:05:18.350 real 0m0.142s 00:05:18.350 user 0m0.088s 00:05:18.351 sys 0m0.054s 00:05:18.351 04:54:47 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.351 04:54:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.351 ************************************ 00:05:18.351 END TEST json_config 00:05:18.351 ************************************ 00:05:18.351 04:54:47 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:18.351 04:54:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.351 04:54:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.351 04:54:47 -- common/autotest_common.sh@10 -- # set +x 00:05:18.351 ************************************ 00:05:18.351 START TEST json_config_extra_key 00:05:18.351 ************************************ 00:05:18.351 04:54:47 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:18.351 04:54:47 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.351 04:54:47 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.351 04:54:47 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.613 04:54:47 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.613 04:54:47 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:18.613 04:54:47 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.613 04:54:47 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.613 --rc genhtml_branch_coverage=1 00:05:18.613 --rc genhtml_function_coverage=1 00:05:18.613 --rc genhtml_legend=1 00:05:18.613 --rc geninfo_all_blocks=1 00:05:18.613 --rc geninfo_unexecuted_blocks=1 00:05:18.613 00:05:18.613 ' 00:05:18.613 04:54:47 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.613 --rc genhtml_branch_coverage=1 00:05:18.613 --rc genhtml_function_coverage=1 00:05:18.613 --rc genhtml_legend=1 00:05:18.613 --rc geninfo_all_blocks=1 00:05:18.613 --rc geninfo_unexecuted_blocks=1 00:05:18.613 00:05:18.613 ' 00:05:18.613 04:54:47 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.613 --rc genhtml_branch_coverage=1 00:05:18.613 --rc genhtml_function_coverage=1 00:05:18.613 --rc genhtml_legend=1 00:05:18.613 --rc geninfo_all_blocks=1 00:05:18.613 --rc geninfo_unexecuted_blocks=1 00:05:18.613 00:05:18.613 ' 00:05:18.613 04:54:47 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.613 --rc genhtml_branch_coverage=1 00:05:18.613 --rc 
genhtml_function_coverage=1 00:05:18.613 --rc genhtml_legend=1 00:05:18.613 --rc geninfo_all_blocks=1 00:05:18.613 --rc geninfo_unexecuted_blocks=1 00:05:18.613 00:05:18.613 ' 00:05:18.613 04:54:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3e346057-9c48-4fc2-acf8-735d220de68f 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=3e346057-9c48-4fc2-acf8-735d220de68f 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.613 04:54:47 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.613 04:54:47 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.613 04:54:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.613 04:54:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.613 04:54:47 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.613 04:54:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:18.614 04:54:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.614 04:54:47 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:18.614 04:54:47 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.614 04:54:47 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.614 04:54:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.614 04:54:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.614 04:54:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.614 04:54:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.614 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.614 04:54:47 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.614 04:54:47 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.614 04:54:47 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.614 04:54:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:18.614 04:54:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:18.614 04:54:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:18.614 04:54:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:18.614 04:54:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:18.614 04:54:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:18.614 04:54:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:18.614 04:54:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:18.614 04:54:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:18.614 04:54:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:18.614 INFO: launching applications... 00:05:18.614 04:54:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
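At this point test/json_config/common.sh has modelled the target as a set of associative arrays keyed by app name, plus an ERR trap for fail-fast cleanup. A minimal sketch of that bookkeeping, with keys and values copied from the trace above (anything else, such as $SPDK_REPO, is an illustrative stand-in):

    # App bookkeeping as traced above; $SPDK_REPO is an assumed stand-in
    # for /home/vagrant/spdk_repo/spdk.
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$SPDK_REPO/test/json_config/extra_key.json")
    # Any failing command reports the function and line that tripped it.
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR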
00:05:18.614 04:54:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:18.614 04:54:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:18.614 04:54:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:18.614 04:54:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:18.614 04:54:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:18.614 04:54:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:18.614 04:54:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.614 04:54:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.614 04:54:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69694 00:05:18.614 04:54:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:18.614 Waiting for target to run... 00:05:18.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:18.614 04:54:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69694 /var/tmp/spdk_tgt.sock 00:05:18.614 04:54:47 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 69694 ']' 00:05:18.614 04:54:47 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:18.614 04:54:47 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.614 04:54:47 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:18.614 04:54:47 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:18.614 04:54:47 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.614 04:54:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:18.614 [2024-11-28 04:54:47.776870] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:18.614 [2024-11-28 04:54:47.777022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69694 ] 00:05:18.875 [2024-11-28 04:54:48.155767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.136 [2024-11-28 04:54:48.172223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.398 00:05:19.398 INFO: shutting down applications... 00:05:19.398 04:54:48 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.398 04:54:48 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:19.398 04:54:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:19.398 04:54:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
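The launch just logged is json_config_test_start_app: it starts spdk_tgt with the extra-key JSON and blocks until the RPC socket answers. The real helpers live in test/json_config/common.sh and autotest_common.sh; what follows is only a hedged approximation of that launch-and-wait flow, with $SPDK_BIN_DIR and $SPDK_ROOT standing in for the build paths seen in the trace:

    # Approximation of the launch-and-wait flow traced above.
    json_config_test_start_app() {
        local app=$1; shift
        "$SPDK_BIN_DIR/spdk_tgt" ${app_params[$app]} -r "${app_socket[$app]}" "$@" &
        app_pid[$app]=$!
        waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"
    }

    # Poll the RPC socket until the target answers, giving up if the
    # process dies first; max_retries=100 mirrors the trace.
    waitforlisten() {
        local pid=$1 rpc_addr=$2 i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1
            "$SPDK_ROOT/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }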
00:05:19.398 04:54:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:19.398 04:54:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:19.398 04:54:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:19.398 04:54:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69694 ]] 00:05:19.398 04:54:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69694 00:05:19.398 04:54:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:19.398 04:54:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.398 04:54:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69694 00:05:19.398 04:54:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.971 04:54:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.971 04:54:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.971 SPDK target shutdown done 00:05:19.971 04:54:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69694 00:05:19.971 04:54:49 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:19.972 04:54:49 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:19.972 04:54:49 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:19.972 04:54:49 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:19.972 Success 00:05:19.972 04:54:49 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:19.972 ************************************ 00:05:19.972 END TEST json_config_extra_key 00:05:19.972 ************************************ 00:05:19.972 00:05:19.972 real 0m1.589s 00:05:19.972 user 0m1.324s 00:05:19.972 sys 0m0.440s 00:05:19.972 04:54:49 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.972 04:54:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:19.972 04:54:49 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:19.972 04:54:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.972 04:54:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.972 04:54:49 -- common/autotest_common.sh@10 -- # set +x 00:05:19.972 ************************************ 00:05:19.972 START TEST alias_rpc 00:05:19.972 ************************************ 00:05:19.972 04:54:49 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.234 * Looking for test storage... 
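The shutdown just logged is a polite SIGINT followed by a bounded poll: up to 30 iterations of kill -0 with a half-second sleep, so the target gets roughly 15 seconds to exit before the test would fail. A sketch of that loop, matching the (( i < 30 )) / sleep 0.5 trace above:

    # Sketch of json_config_test_shutdown_app as traced above.
    json_config_test_shutdown_app() {
        local app=$1 i
        kill -SIGINT "${app_pid[$app]}"
        for ((i = 0; i < 30; i++)); do
            kill -0 "${app_pid[$app]}" 2>/dev/null || break   # pid gone: SIGINT honoured
            sleep 0.5
        done
        app_pid[$app]=                                        # clear the slot
        echo 'SPDK target shutdown done'
    }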
00:05:20.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:20.234 04:54:49 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.234 04:54:49 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.234 04:54:49 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.234 04:54:49 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:20.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.234 04:54:49 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:20.234 04:54:49 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.234 04:54:49 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.234 --rc genhtml_branch_coverage=1 00:05:20.234 --rc genhtml_function_coverage=1 00:05:20.234 --rc genhtml_legend=1 00:05:20.234 --rc geninfo_all_blocks=1 00:05:20.234 --rc geninfo_unexecuted_blocks=1 00:05:20.234 00:05:20.234 ' 00:05:20.234 04:54:49 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.234 --rc genhtml_branch_coverage=1 00:05:20.234 --rc genhtml_function_coverage=1 00:05:20.234 --rc genhtml_legend=1 00:05:20.234 --rc geninfo_all_blocks=1 00:05:20.234 --rc geninfo_unexecuted_blocks=1 00:05:20.234 00:05:20.234 ' 00:05:20.234 04:54:49 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:20.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.234 --rc genhtml_branch_coverage=1 00:05:20.234 --rc genhtml_function_coverage=1 00:05:20.234 --rc genhtml_legend=1 00:05:20.234 --rc geninfo_all_blocks=1 00:05:20.234 --rc geninfo_unexecuted_blocks=1 00:05:20.234 00:05:20.234 ' 00:05:20.234 04:54:49 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.234 --rc genhtml_branch_coverage=1 00:05:20.234 --rc genhtml_function_coverage=1 00:05:20.234 --rc genhtml_legend=1 00:05:20.234 --rc geninfo_all_blocks=1 00:05:20.234 --rc geninfo_unexecuted_blocks=1 00:05:20.234 00:05:20.234 ' 00:05:20.234 04:54:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:20.234 04:54:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69768 00:05:20.234 04:54:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69768 00:05:20.234 04:54:49 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 69768 ']' 00:05:20.234 04:54:49 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.234 04:54:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:20.234 04:54:49 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.234 04:54:49 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.234 04:54:49 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.234 04:54:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.234 [2024-11-28 04:54:49.440766] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
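Before every sub-test the suite re-runs the same lcov gate seen above: scripts/common.sh splits both version strings on '.', '-' and ':' and compares them field by field, so `lt 1.15 2` succeeds at the first field (1 < 2) and the branch/function coverage flags get exported. A condensed sketch of that comparison (the real cmp_versions also handles '>', '>=', and '<='; this covers only '<'):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local v len
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left is newer, '<' fails
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older, '<' holds
        done
        return 1   # equal, so '<' does not hold
    }

    lt 1.15 2 && echo 'lcov 1.x: enable branch/function coverage flags'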
00:05:20.234 [2024-11-28 04:54:49.441117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69768 ] 00:05:20.495 [2024-11-28 04:54:49.583835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.495 [2024-11-28 04:54:49.615048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.067 04:54:50 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.067 04:54:50 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:21.067 04:54:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:21.328 04:54:50 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69768 00:05:21.328 04:54:50 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 69768 ']' 00:05:21.328 04:54:50 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 69768 00:05:21.328 04:54:50 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:21.328 04:54:50 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.328 04:54:50 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69768 00:05:21.328 killing process with pid 69768 00:05:21.328 04:54:50 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.328 04:54:50 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.328 04:54:50 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69768' 00:05:21.328 04:54:50 alias_rpc -- common/autotest_common.sh@973 -- # kill 69768 00:05:21.328 04:54:50 alias_rpc -- common/autotest_common.sh@978 -- # wait 69768 00:05:21.900 00:05:21.900 real 0m1.692s 00:05:21.900 user 0m1.765s 00:05:21.900 sys 0m0.462s 00:05:21.900 04:54:50 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.900 ************************************ 00:05:21.900 END TEST alias_rpc 00:05:21.900 ************************************ 00:05:21.900 04:54:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.900 04:54:50 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:21.900 04:54:50 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:21.900 04:54:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.900 04:54:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.900 04:54:50 -- common/autotest_common.sh@10 -- # set +x 00:05:21.900 ************************************ 00:05:21.900 START TEST spdkcli_tcp 00:05:21.900 ************************************ 00:05:21.900 04:54:50 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:21.900 * Looking for test storage... 
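killprocess, traced above as alias_rpc tears down pid 69768, is deliberately defensive: it verifies the pid is alive, checks the process name with ps so it never signals a sudo wrapper directly, and only then kills and waits so the exit status is reaped. A simplified sketch (the real helper has extra handling for the sudo case; this version just refuses to touch it):

    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                        # must still be alive
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap and collect the exit code
    }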
00:05:21.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.900 04:54:51 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.900 --rc genhtml_branch_coverage=1 00:05:21.900 --rc genhtml_function_coverage=1 00:05:21.900 --rc genhtml_legend=1 00:05:21.900 --rc geninfo_all_blocks=1 00:05:21.900 --rc geninfo_unexecuted_blocks=1 00:05:21.900 00:05:21.900 ' 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.900 --rc genhtml_branch_coverage=1 00:05:21.900 --rc genhtml_function_coverage=1 00:05:21.900 --rc genhtml_legend=1 00:05:21.900 --rc geninfo_all_blocks=1 00:05:21.900 --rc geninfo_unexecuted_blocks=1 00:05:21.900 
00:05:21.900 ' 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.900 --rc genhtml_branch_coverage=1 00:05:21.900 --rc genhtml_function_coverage=1 00:05:21.900 --rc genhtml_legend=1 00:05:21.900 --rc geninfo_all_blocks=1 00:05:21.900 --rc geninfo_unexecuted_blocks=1 00:05:21.900 00:05:21.900 ' 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.900 --rc genhtml_branch_coverage=1 00:05:21.900 --rc genhtml_function_coverage=1 00:05:21.900 --rc genhtml_legend=1 00:05:21.900 --rc geninfo_all_blocks=1 00:05:21.900 --rc geninfo_unexecuted_blocks=1 00:05:21.900 00:05:21.900 ' 00:05:21.900 04:54:51 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:21.900 04:54:51 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:21.900 04:54:51 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:21.900 04:54:51 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:21.900 04:54:51 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:21.900 04:54:51 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:21.900 04:54:51 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.900 04:54:51 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69847 00:05:21.900 04:54:51 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69847 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 69847 ']' 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.900 04:54:51 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:21.900 04:54:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.161 [2024-11-28 04:54:51.206476] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
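Unlike the single-core targets above, spdkcli_tcp starts spdk_tgt with -m 0x3 and -p 0: a two-bit cpumask plus an explicit main core, which is why the following lines report reactors on both core 0 and core 1. The mask is plain bit arithmetic, for example:

    # -m takes a hex cpumask; bit n selects core n, -p picks the main core.
    printf '0x%x\n' $(( (1 << 0) | (1 << 1) ))   # prints 0x3: cores 0 and 1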
00:05:22.161 [2024-11-28 04:54:51.206621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69847 ] 00:05:22.161 [2024-11-28 04:54:51.353339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.161 [2024-11-28 04:54:51.383555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.161 [2024-11-28 04:54:51.383613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.106 04:54:52 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.106 04:54:52 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:23.106 04:54:52 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69864 00:05:23.106 04:54:52 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:23.106 04:54:52 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:23.106 [ 00:05:23.106 "bdev_malloc_delete", 00:05:23.106 "bdev_malloc_create", 00:05:23.106 "bdev_null_resize", 00:05:23.106 "bdev_null_delete", 00:05:23.106 "bdev_null_create", 00:05:23.106 "bdev_nvme_cuse_unregister", 00:05:23.106 "bdev_nvme_cuse_register", 00:05:23.107 "bdev_opal_new_user", 00:05:23.107 "bdev_opal_set_lock_state", 00:05:23.107 "bdev_opal_delete", 00:05:23.107 "bdev_opal_get_info", 00:05:23.107 "bdev_opal_create", 00:05:23.107 "bdev_nvme_opal_revert", 00:05:23.107 "bdev_nvme_opal_init", 00:05:23.107 "bdev_nvme_send_cmd", 00:05:23.107 "bdev_nvme_set_keys", 00:05:23.107 "bdev_nvme_get_path_iostat", 00:05:23.107 "bdev_nvme_get_mdns_discovery_info", 00:05:23.107 "bdev_nvme_stop_mdns_discovery", 00:05:23.107 "bdev_nvme_start_mdns_discovery", 00:05:23.107 "bdev_nvme_set_multipath_policy", 00:05:23.107 "bdev_nvme_set_preferred_path", 00:05:23.107 "bdev_nvme_get_io_paths", 00:05:23.107 "bdev_nvme_remove_error_injection", 00:05:23.107 "bdev_nvme_add_error_injection", 00:05:23.107 "bdev_nvme_get_discovery_info", 00:05:23.107 "bdev_nvme_stop_discovery", 00:05:23.107 "bdev_nvme_start_discovery", 00:05:23.107 "bdev_nvme_get_controller_health_info", 00:05:23.107 "bdev_nvme_disable_controller", 00:05:23.107 "bdev_nvme_enable_controller", 00:05:23.107 "bdev_nvme_reset_controller", 00:05:23.107 "bdev_nvme_get_transport_statistics", 00:05:23.107 "bdev_nvme_apply_firmware", 00:05:23.107 "bdev_nvme_detach_controller", 00:05:23.107 "bdev_nvme_get_controllers", 00:05:23.107 "bdev_nvme_attach_controller", 00:05:23.107 "bdev_nvme_set_hotplug", 00:05:23.107 "bdev_nvme_set_options", 00:05:23.107 "bdev_passthru_delete", 00:05:23.107 "bdev_passthru_create", 00:05:23.107 "bdev_lvol_set_parent_bdev", 00:05:23.107 "bdev_lvol_set_parent", 00:05:23.107 "bdev_lvol_check_shallow_copy", 00:05:23.107 "bdev_lvol_start_shallow_copy", 00:05:23.107 "bdev_lvol_grow_lvstore", 00:05:23.107 "bdev_lvol_get_lvols", 00:05:23.107 "bdev_lvol_get_lvstores", 00:05:23.107 "bdev_lvol_delete", 00:05:23.107 "bdev_lvol_set_read_only", 00:05:23.107 "bdev_lvol_resize", 00:05:23.107 "bdev_lvol_decouple_parent", 00:05:23.107 "bdev_lvol_inflate", 00:05:23.107 "bdev_lvol_rename", 00:05:23.107 "bdev_lvol_clone_bdev", 00:05:23.107 "bdev_lvol_clone", 00:05:23.107 "bdev_lvol_snapshot", 00:05:23.107 "bdev_lvol_create", 00:05:23.107 "bdev_lvol_delete_lvstore", 00:05:23.107 "bdev_lvol_rename_lvstore", 00:05:23.107 
"bdev_lvol_create_lvstore", 00:05:23.107 "bdev_raid_set_options", 00:05:23.107 "bdev_raid_remove_base_bdev", 00:05:23.107 "bdev_raid_add_base_bdev", 00:05:23.107 "bdev_raid_delete", 00:05:23.107 "bdev_raid_create", 00:05:23.107 "bdev_raid_get_bdevs", 00:05:23.107 "bdev_error_inject_error", 00:05:23.107 "bdev_error_delete", 00:05:23.107 "bdev_error_create", 00:05:23.107 "bdev_split_delete", 00:05:23.107 "bdev_split_create", 00:05:23.107 "bdev_delay_delete", 00:05:23.107 "bdev_delay_create", 00:05:23.107 "bdev_delay_update_latency", 00:05:23.107 "bdev_zone_block_delete", 00:05:23.107 "bdev_zone_block_create", 00:05:23.107 "blobfs_create", 00:05:23.107 "blobfs_detect", 00:05:23.107 "blobfs_set_cache_size", 00:05:23.107 "bdev_xnvme_delete", 00:05:23.107 "bdev_xnvme_create", 00:05:23.107 "bdev_aio_delete", 00:05:23.107 "bdev_aio_rescan", 00:05:23.107 "bdev_aio_create", 00:05:23.107 "bdev_ftl_set_property", 00:05:23.107 "bdev_ftl_get_properties", 00:05:23.107 "bdev_ftl_get_stats", 00:05:23.107 "bdev_ftl_unmap", 00:05:23.107 "bdev_ftl_unload", 00:05:23.107 "bdev_ftl_delete", 00:05:23.107 "bdev_ftl_load", 00:05:23.107 "bdev_ftl_create", 00:05:23.107 "bdev_virtio_attach_controller", 00:05:23.107 "bdev_virtio_scsi_get_devices", 00:05:23.107 "bdev_virtio_detach_controller", 00:05:23.107 "bdev_virtio_blk_set_hotplug", 00:05:23.107 "bdev_iscsi_delete", 00:05:23.107 "bdev_iscsi_create", 00:05:23.107 "bdev_iscsi_set_options", 00:05:23.107 "accel_error_inject_error", 00:05:23.107 "ioat_scan_accel_module", 00:05:23.107 "dsa_scan_accel_module", 00:05:23.107 "iaa_scan_accel_module", 00:05:23.107 "keyring_file_remove_key", 00:05:23.107 "keyring_file_add_key", 00:05:23.107 "keyring_linux_set_options", 00:05:23.107 "fsdev_aio_delete", 00:05:23.107 "fsdev_aio_create", 00:05:23.107 "iscsi_get_histogram", 00:05:23.107 "iscsi_enable_histogram", 00:05:23.107 "iscsi_set_options", 00:05:23.107 "iscsi_get_auth_groups", 00:05:23.107 "iscsi_auth_group_remove_secret", 00:05:23.107 "iscsi_auth_group_add_secret", 00:05:23.107 "iscsi_delete_auth_group", 00:05:23.107 "iscsi_create_auth_group", 00:05:23.107 "iscsi_set_discovery_auth", 00:05:23.107 "iscsi_get_options", 00:05:23.107 "iscsi_target_node_request_logout", 00:05:23.107 "iscsi_target_node_set_redirect", 00:05:23.107 "iscsi_target_node_set_auth", 00:05:23.107 "iscsi_target_node_add_lun", 00:05:23.107 "iscsi_get_stats", 00:05:23.107 "iscsi_get_connections", 00:05:23.107 "iscsi_portal_group_set_auth", 00:05:23.107 "iscsi_start_portal_group", 00:05:23.107 "iscsi_delete_portal_group", 00:05:23.107 "iscsi_create_portal_group", 00:05:23.107 "iscsi_get_portal_groups", 00:05:23.107 "iscsi_delete_target_node", 00:05:23.107 "iscsi_target_node_remove_pg_ig_maps", 00:05:23.107 "iscsi_target_node_add_pg_ig_maps", 00:05:23.107 "iscsi_create_target_node", 00:05:23.107 "iscsi_get_target_nodes", 00:05:23.107 "iscsi_delete_initiator_group", 00:05:23.107 "iscsi_initiator_group_remove_initiators", 00:05:23.107 "iscsi_initiator_group_add_initiators", 00:05:23.107 "iscsi_create_initiator_group", 00:05:23.107 "iscsi_get_initiator_groups", 00:05:23.107 "nvmf_set_crdt", 00:05:23.107 "nvmf_set_config", 00:05:23.107 "nvmf_set_max_subsystems", 00:05:23.107 "nvmf_stop_mdns_prr", 00:05:23.107 "nvmf_publish_mdns_prr", 00:05:23.107 "nvmf_subsystem_get_listeners", 00:05:23.107 "nvmf_subsystem_get_qpairs", 00:05:23.107 "nvmf_subsystem_get_controllers", 00:05:23.107 "nvmf_get_stats", 00:05:23.107 "nvmf_get_transports", 00:05:23.107 "nvmf_create_transport", 00:05:23.107 "nvmf_get_targets", 00:05:23.107 
"nvmf_delete_target", 00:05:23.107 "nvmf_create_target", 00:05:23.107 "nvmf_subsystem_allow_any_host", 00:05:23.107 "nvmf_subsystem_set_keys", 00:05:23.107 "nvmf_subsystem_remove_host", 00:05:23.107 "nvmf_subsystem_add_host", 00:05:23.107 "nvmf_ns_remove_host", 00:05:23.107 "nvmf_ns_add_host", 00:05:23.107 "nvmf_subsystem_remove_ns", 00:05:23.107 "nvmf_subsystem_set_ns_ana_group", 00:05:23.107 "nvmf_subsystem_add_ns", 00:05:23.107 "nvmf_subsystem_listener_set_ana_state", 00:05:23.107 "nvmf_discovery_get_referrals", 00:05:23.107 "nvmf_discovery_remove_referral", 00:05:23.107 "nvmf_discovery_add_referral", 00:05:23.107 "nvmf_subsystem_remove_listener", 00:05:23.107 "nvmf_subsystem_add_listener", 00:05:23.107 "nvmf_delete_subsystem", 00:05:23.107 "nvmf_create_subsystem", 00:05:23.107 "nvmf_get_subsystems", 00:05:23.107 "env_dpdk_get_mem_stats", 00:05:23.107 "nbd_get_disks", 00:05:23.107 "nbd_stop_disk", 00:05:23.107 "nbd_start_disk", 00:05:23.107 "ublk_recover_disk", 00:05:23.107 "ublk_get_disks", 00:05:23.107 "ublk_stop_disk", 00:05:23.107 "ublk_start_disk", 00:05:23.107 "ublk_destroy_target", 00:05:23.107 "ublk_create_target", 00:05:23.107 "virtio_blk_create_transport", 00:05:23.107 "virtio_blk_get_transports", 00:05:23.107 "vhost_controller_set_coalescing", 00:05:23.107 "vhost_get_controllers", 00:05:23.107 "vhost_delete_controller", 00:05:23.107 "vhost_create_blk_controller", 00:05:23.107 "vhost_scsi_controller_remove_target", 00:05:23.107 "vhost_scsi_controller_add_target", 00:05:23.107 "vhost_start_scsi_controller", 00:05:23.107 "vhost_create_scsi_controller", 00:05:23.107 "thread_set_cpumask", 00:05:23.107 "scheduler_set_options", 00:05:23.107 "framework_get_governor", 00:05:23.107 "framework_get_scheduler", 00:05:23.107 "framework_set_scheduler", 00:05:23.107 "framework_get_reactors", 00:05:23.107 "thread_get_io_channels", 00:05:23.107 "thread_get_pollers", 00:05:23.107 "thread_get_stats", 00:05:23.107 "framework_monitor_context_switch", 00:05:23.107 "spdk_kill_instance", 00:05:23.107 "log_enable_timestamps", 00:05:23.107 "log_get_flags", 00:05:23.107 "log_clear_flag", 00:05:23.107 "log_set_flag", 00:05:23.107 "log_get_level", 00:05:23.107 "log_set_level", 00:05:23.107 "log_get_print_level", 00:05:23.107 "log_set_print_level", 00:05:23.107 "framework_enable_cpumask_locks", 00:05:23.107 "framework_disable_cpumask_locks", 00:05:23.107 "framework_wait_init", 00:05:23.107 "framework_start_init", 00:05:23.107 "scsi_get_devices", 00:05:23.107 "bdev_get_histogram", 00:05:23.107 "bdev_enable_histogram", 00:05:23.107 "bdev_set_qos_limit", 00:05:23.107 "bdev_set_qd_sampling_period", 00:05:23.107 "bdev_get_bdevs", 00:05:23.107 "bdev_reset_iostat", 00:05:23.107 "bdev_get_iostat", 00:05:23.107 "bdev_examine", 00:05:23.107 "bdev_wait_for_examine", 00:05:23.107 "bdev_set_options", 00:05:23.107 "accel_get_stats", 00:05:23.107 "accel_set_options", 00:05:23.107 "accel_set_driver", 00:05:23.107 "accel_crypto_key_destroy", 00:05:23.107 "accel_crypto_keys_get", 00:05:23.107 "accel_crypto_key_create", 00:05:23.107 "accel_assign_opc", 00:05:23.107 "accel_get_module_info", 00:05:23.107 "accel_get_opc_assignments", 00:05:23.107 "vmd_rescan", 00:05:23.107 "vmd_remove_device", 00:05:23.107 "vmd_enable", 00:05:23.107 "sock_get_default_impl", 00:05:23.107 "sock_set_default_impl", 00:05:23.107 "sock_impl_set_options", 00:05:23.107 "sock_impl_get_options", 00:05:23.107 "iobuf_get_stats", 00:05:23.107 "iobuf_set_options", 00:05:23.107 "keyring_get_keys", 00:05:23.107 "framework_get_pci_devices", 00:05:23.108 
"framework_get_config", 00:05:23.108 "framework_get_subsystems", 00:05:23.108 "fsdev_set_opts", 00:05:23.108 "fsdev_get_opts", 00:05:23.108 "trace_get_info", 00:05:23.108 "trace_get_tpoint_group_mask", 00:05:23.108 "trace_disable_tpoint_group", 00:05:23.108 "trace_enable_tpoint_group", 00:05:23.108 "trace_clear_tpoint_mask", 00:05:23.108 "trace_set_tpoint_mask", 00:05:23.108 "notify_get_notifications", 00:05:23.108 "notify_get_types", 00:05:23.108 "spdk_get_version", 00:05:23.108 "rpc_get_methods" 00:05:23.108 ] 00:05:23.108 04:54:52 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:23.108 04:54:52 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.108 04:54:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.108 04:54:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:23.108 04:54:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69847 00:05:23.108 04:54:52 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 69847 ']' 00:05:23.108 04:54:52 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 69847 00:05:23.108 04:54:52 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:23.108 04:54:52 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.108 04:54:52 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69847 00:05:23.108 killing process with pid 69847 00:05:23.108 04:54:52 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.108 04:54:52 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.108 04:54:52 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69847' 00:05:23.108 04:54:52 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 69847 00:05:23.108 04:54:52 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 69847 00:05:23.681 ************************************ 00:05:23.681 END TEST spdkcli_tcp 00:05:23.681 ************************************ 00:05:23.681 00:05:23.681 real 0m1.729s 00:05:23.681 user 0m3.031s 00:05:23.681 sys 0m0.490s 00:05:23.681 04:54:52 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.681 04:54:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.681 04:54:52 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:23.681 04:54:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.681 04:54:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.681 04:54:52 -- common/autotest_common.sh@10 -- # set +x 00:05:23.681 ************************************ 00:05:23.681 START TEST dpdk_mem_utility 00:05:23.681 ************************************ 00:05:23.681 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:23.681 * Looking for test storage... 
00:05:23.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:23.681 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.681 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.681 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.681 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:23.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.681 04:54:52 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:23.681 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.681 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.681 --rc genhtml_branch_coverage=1 00:05:23.681 --rc genhtml_function_coverage=1 00:05:23.681 --rc genhtml_legend=1 00:05:23.681 --rc geninfo_all_blocks=1 00:05:23.681 --rc geninfo_unexecuted_blocks=1 00:05:23.681 00:05:23.681 ' 00:05:23.681 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.681 --rc genhtml_branch_coverage=1 00:05:23.681 --rc genhtml_function_coverage=1 00:05:23.681 --rc genhtml_legend=1 00:05:23.681 --rc geninfo_all_blocks=1 00:05:23.681 --rc geninfo_unexecuted_blocks=1 00:05:23.681 00:05:23.681 ' 00:05:23.681 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.681 --rc genhtml_branch_coverage=1 00:05:23.681 --rc genhtml_function_coverage=1 00:05:23.681 --rc genhtml_legend=1 00:05:23.681 --rc geninfo_all_blocks=1 00:05:23.681 --rc geninfo_unexecuted_blocks=1 00:05:23.681 00:05:23.681 ' 00:05:23.681 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.681 --rc genhtml_branch_coverage=1 00:05:23.681 --rc genhtml_function_coverage=1 00:05:23.681 --rc genhtml_legend=1 00:05:23.681 --rc geninfo_all_blocks=1 00:05:23.681 --rc geninfo_unexecuted_blocks=1 00:05:23.681 00:05:23.681 ' 00:05:23.681 04:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:23.681 04:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69947 00:05:23.681 04:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69947 00:05:23.681 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 69947 ']' 00:05:23.681 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.681 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.681 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.681 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.681 04:54:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.682 04:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.943 [2024-11-28 04:54:52.997304] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
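The dpdk_mem_utility test starting here exercises the env_dpdk_get_mem_stats RPC, which makes the target write its DPDK heap state to /tmp/spdk_mem_dump.txt, and then runs scripts/dpdk_mem_info.py to render the heap, mempool, and memzone report that follows. The round trip, using only commands visible in the trace:

    # Dump the running target's DPDK memory state, then summarise it.
    scripts/rpc.py env_dpdk_get_mem_stats    # returns {"filename": "/tmp/spdk_mem_dump.txt"}
    scripts/dpdk_mem_info.py                 # heap/mempool/memzone totals
    scripts/dpdk_mem_info.py -m 0            # detailed element map for heap 0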
00:05:23.943 [2024-11-28 04:54:52.997728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69947 ] 00:05:23.943 [2024-11-28 04:54:53.145115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.943 [2024-11-28 04:54:53.175390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.888 04:54:53 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.888 04:54:53 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:24.888 04:54:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:24.888 04:54:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:24.888 04:54:53 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.888 04:54:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.888 { 00:05:24.888 "filename": "/tmp/spdk_mem_dump.txt" 00:05:24.888 } 00:05:24.888 04:54:53 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.888 04:54:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:24.888 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:24.888 1 heaps totaling size 818.000000 MiB 00:05:24.888 size: 818.000000 MiB heap id: 0 00:05:24.888 end heaps---------- 00:05:24.888 9 mempools totaling size 603.782043 MiB 00:05:24.888 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:24.888 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:24.888 size: 100.555481 MiB name: bdev_io_69947 00:05:24.888 size: 50.003479 MiB name: msgpool_69947 00:05:24.888 size: 36.509338 MiB name: fsdev_io_69947 00:05:24.888 size: 21.763794 MiB name: PDU_Pool 00:05:24.888 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:24.888 size: 4.133484 MiB name: evtpool_69947 00:05:24.888 size: 0.026123 MiB name: Session_Pool 00:05:24.888 end mempools------- 00:05:24.888 6 memzones totaling size 4.142822 MiB 00:05:24.888 size: 1.000366 MiB name: RG_ring_0_69947 00:05:24.888 size: 1.000366 MiB name: RG_ring_1_69947 00:05:24.888 size: 1.000366 MiB name: RG_ring_4_69947 00:05:24.888 size: 1.000366 MiB name: RG_ring_5_69947 00:05:24.888 size: 0.125366 MiB name: RG_ring_2_69947 00:05:24.888 size: 0.015991 MiB name: RG_ring_3_69947 00:05:24.888 end memzones------- 00:05:24.888 04:54:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:24.888 heap id: 0 total size: 818.000000 MiB number of busy elements: 317 number of free elements: 15 00:05:24.888 list of free elements. 
size: 10.802490 MiB 00:05:24.888 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:24.888 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:24.888 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:24.888 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:24.888 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:24.888 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:24.888 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:24.888 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:24.888 element at address: 0x20001ae00000 with size: 0.567688 MiB 00:05:24.888 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:24.888 element at address: 0x200000c00000 with size: 0.486267 MiB 00:05:24.888 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:24.888 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:24.888 element at address: 0x200028200000 with size: 0.395752 MiB 00:05:24.888 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:24.888 list of standard malloc elements. size: 199.268616 MiB 00:05:24.888 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:24.888 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:24.888 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:24.888 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:24.888 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:24.888 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:24.888 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:24.888 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:24.888 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:24.888 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:24.888 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:24.888 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:24.888 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:24.888 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:24.888 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:24.888 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:24.888 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:24.888 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:24.888 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:24.888 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:24.888 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:24.888 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:24.888 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:24.888 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:24.889 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:24.889 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:24.889 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:24.889 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:24.889 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae92c80 with size: 0.000183 MiB 
00:05:24.890 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:05:24.890 element at 
address: 0x20001ae95200 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200028265500 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826c480 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826c540 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826c600 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826c780 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826c840 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826c900 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826d380 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826da40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826de00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826e280 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826e340 
with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826e400 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826e580 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826e640 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826e940 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826f780 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826f840 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826f900 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:24.890 list of memzone associated elements. 
size: 607.928894 MiB 00:05:24.890 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:24.890 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:24.890 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:24.890 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:24.890 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:24.890 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_69947_0 00:05:24.890 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:24.890 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69947_0 00:05:24.890 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:24.890 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69947_0 00:05:24.890 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:24.891 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:24.891 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:24.891 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:24.891 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:24.891 associated memzone info: size: 3.000122 MiB name: MP_evtpool_69947_0 00:05:24.891 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:24.891 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69947 00:05:24.891 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:24.891 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69947 00:05:24.891 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:24.891 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:24.891 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:24.891 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:24.891 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:24.891 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:24.891 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:24.891 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:24.891 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:24.891 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69947 00:05:24.891 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:24.891 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69947 00:05:24.891 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:24.891 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69947 00:05:24.891 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:24.891 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69947 00:05:24.891 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:24.891 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69947 00:05:24.891 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:24.891 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69947 00:05:24.891 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:24.891 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:24.891 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:24.891 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:24.891 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:24.891 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:24.891 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:24.891 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_69947 00:05:24.891 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:24.891 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69947 00:05:24.891 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:24.891 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:24.891 element at address: 0x200028265680 with size: 0.023743 MiB 00:05:24.891 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:24.891 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:24.891 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69947 00:05:24.891 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:05:24.891 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:24.891 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:24.891 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69947 00:05:24.891 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:24.891 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69947 00:05:24.891 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:24.891 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69947 00:05:24.891 element at address: 0x20002826c280 with size: 0.000305 MiB 00:05:24.891 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:24.891 04:54:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:24.891 04:54:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69947 00:05:24.891 04:54:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 69947 ']' 00:05:24.891 04:54:53 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 69947 00:05:24.891 04:54:53 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:24.891 04:54:53 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.891 04:54:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69947 00:05:24.891 killing process with pid 69947 00:05:24.891 04:54:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.891 04:54:53 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.891 04:54:53 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69947' 00:05:24.891 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 69947 00:05:24.891 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 69947 00:05:25.152 ************************************ 00:05:25.152 END TEST dpdk_mem_utility 00:05:25.152 ************************************ 00:05:25.152 00:05:25.152 real 0m1.575s 00:05:25.152 user 0m1.569s 00:05:25.152 sys 0m0.455s 00:05:25.152 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.152 04:54:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.152 04:54:54 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:25.152 04:54:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.152 04:54:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.152 04:54:54 -- common/autotest_common.sh@10 -- # set +x 
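In short, the dpdk_mem_utility suite above reduces to a three-step flow: start spdk_tgt, issue the env_dpdk_get_mem_stats RPC (which writes the dump and returns {"filename": "/tmp/spdk_mem_dump.txt"}), then post-process the dump with scripts/dpdk_mem_info.py, once for the heap/mempool/memzone summary and once with -m 0 for the per-element view of heap 0. A minimal sketch of that flow, assuming rpc_cmd in the test wraps scripts/rpc.py against the default /var/tmp/spdk.sock socket and that the working directory is an SPDK checkout (paths mirror this run, not a general API):

  ./build/bin/spdk_tgt &                    # start the target; wait until it listens on /var/tmp/spdk.sock
  ./scripts/rpc.py env_dpdk_get_mem_stats   # asks DPDK for memory stats; returns {"filename": "/tmp/spdk_mem_dump.txt"}
  ./scripts/dpdk_mem_info.py                # summarizes heaps, mempools and memzones from the dump
  ./scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0, as printed above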
00:05:25.152 ************************************ 00:05:25.152 START TEST event 00:05:25.152 ************************************ 00:05:25.152 04:54:54 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:25.413 * Looking for test storage... 00:05:25.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:25.413 04:54:54 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:25.413 04:54:54 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:25.413 04:54:54 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:25.413 04:54:54 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:25.413 04:54:54 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.413 04:54:54 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.413 04:54:54 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.413 04:54:54 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.413 04:54:54 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.413 04:54:54 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.413 04:54:54 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.413 04:54:54 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.413 04:54:54 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.413 04:54:54 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.413 04:54:54 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.413 04:54:54 event -- scripts/common.sh@344 -- # case "$op" in 00:05:25.413 04:54:54 event -- scripts/common.sh@345 -- # : 1 00:05:25.413 04:54:54 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.413 04:54:54 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.413 04:54:54 event -- scripts/common.sh@365 -- # decimal 1 00:05:25.413 04:54:54 event -- scripts/common.sh@353 -- # local d=1 00:05:25.413 04:54:54 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.413 04:54:54 event -- scripts/common.sh@355 -- # echo 1 00:05:25.413 04:54:54 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.413 04:54:54 event -- scripts/common.sh@366 -- # decimal 2 00:05:25.413 04:54:54 event -- scripts/common.sh@353 -- # local d=2 00:05:25.414 04:54:54 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.414 04:54:54 event -- scripts/common.sh@355 -- # echo 2 00:05:25.414 04:54:54 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.414 04:54:54 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.414 04:54:54 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.414 04:54:54 event -- scripts/common.sh@368 -- # return 0 00:05:25.414 04:54:54 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.414 04:54:54 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:25.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.414 --rc genhtml_branch_coverage=1 00:05:25.414 --rc genhtml_function_coverage=1 00:05:25.414 --rc genhtml_legend=1 00:05:25.414 --rc geninfo_all_blocks=1 00:05:25.414 --rc geninfo_unexecuted_blocks=1 00:05:25.414 00:05:25.414 ' 00:05:25.414 04:54:54 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:25.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.414 --rc genhtml_branch_coverage=1 00:05:25.414 --rc genhtml_function_coverage=1 00:05:25.414 --rc genhtml_legend=1 00:05:25.414 --rc 
geninfo_all_blocks=1 00:05:25.414 --rc geninfo_unexecuted_blocks=1 00:05:25.414 00:05:25.414 ' 00:05:25.414 04:54:54 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:25.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.414 --rc genhtml_branch_coverage=1 00:05:25.414 --rc genhtml_function_coverage=1 00:05:25.414 --rc genhtml_legend=1 00:05:25.414 --rc geninfo_all_blocks=1 00:05:25.414 --rc geninfo_unexecuted_blocks=1 00:05:25.414 00:05:25.414 ' 00:05:25.414 04:54:54 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:25.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.414 --rc genhtml_branch_coverage=1 00:05:25.414 --rc genhtml_function_coverage=1 00:05:25.414 --rc genhtml_legend=1 00:05:25.414 --rc geninfo_all_blocks=1 00:05:25.414 --rc geninfo_unexecuted_blocks=1 00:05:25.414 00:05:25.414 ' 00:05:25.414 04:54:54 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:25.414 04:54:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:25.414 04:54:54 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.414 04:54:54 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:25.414 04:54:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.414 04:54:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.414 ************************************ 00:05:25.414 START TEST event_perf 00:05:25.414 ************************************ 00:05:25.414 04:54:54 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.414 Running I/O for 1 seconds...[2024-11-28 04:54:54.609879] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:25.414 [2024-11-28 04:54:54.610292] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70028 ] 00:05:25.675 [2024-11-28 04:54:54.768606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.675 [2024-11-28 04:54:54.802119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.675 Running I/O for 1 seconds...[2024-11-28 04:54:54.802400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.675 [2024-11-28 04:54:54.802602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.675 [2024-11-28 04:54:54.802736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.716 00:05:26.716 lcore 0: 135859 00:05:26.716 lcore 1: 135859 00:05:26.716 lcore 2: 135862 00:05:26.716 lcore 3: 135859 00:05:26.716 done. 
00:05:26.716 00:05:26.716 real 0m1.292s 00:05:26.716 user 0m4.070s 00:05:26.716 sys 0m0.098s 00:05:26.716 04:54:55 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.716 ************************************ 00:05:26.716 END TEST event_perf 00:05:26.716 ************************************ 00:05:26.716 04:54:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.716 04:54:55 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:26.716 04:54:55 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:26.716 04:54:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.716 04:54:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.716 ************************************ 00:05:26.716 START TEST event_reactor 00:05:26.716 ************************************ 00:05:26.716 04:54:55 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:26.716 [2024-11-28 04:54:55.968046] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:26.716 [2024-11-28 04:54:55.968388] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70067 ] 00:05:26.978 [2024-11-28 04:54:56.116387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.978 [2024-11-28 04:54:56.145813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.922 test_start 00:05:27.922 oneshot 00:05:27.922 tick 100 00:05:27.922 tick 100 00:05:27.922 tick 250 00:05:27.922 tick 100 00:05:27.922 tick 100 00:05:27.922 tick 100 00:05:27.922 tick 250 00:05:27.922 tick 500 00:05:27.922 tick 100 00:05:27.922 tick 100 00:05:27.922 tick 250 00:05:27.922 tick 100 00:05:27.922 tick 100 00:05:27.922 test_end 00:05:27.922 ************************************ 00:05:27.922 END TEST event_reactor 00:05:27.922 ************************************ 00:05:27.922 00:05:27.922 real 0m1.263s 00:05:27.922 user 0m1.081s 00:05:27.922 sys 0m0.071s 00:05:27.922 04:54:57 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.922 04:54:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:28.183 04:54:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.183 04:54:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:28.183 04:54:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.183 04:54:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.183 ************************************ 00:05:28.183 START TEST event_reactor_perf 00:05:28.183 ************************************ 00:05:28.183 04:54:57 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.183 [2024-11-28 04:54:57.303059] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:28.183 [2024-11-28 04:54:57.303383] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70098 ] 00:05:28.183 [2024-11-28 04:54:57.449805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.445 [2024-11-28 04:54:57.477837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.389 test_start 00:05:29.389 test_end 00:05:29.389 Performance: 306641 events per second 00:05:29.389 00:05:29.389 real 0m1.257s 00:05:29.389 user 0m1.089s 00:05:29.389 sys 0m0.059s 00:05:29.389 ************************************ 00:05:29.389 END TEST event_reactor_perf 00:05:29.389 ************************************ 00:05:29.389 04:54:58 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.389 04:54:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.389 04:54:58 event -- event/event.sh@49 -- # uname -s 00:05:29.389 04:54:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:29.389 04:54:58 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:29.389 04:54:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.389 04:54:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.389 04:54:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.389 ************************************ 00:05:29.389 START TEST event_scheduler 00:05:29.389 ************************************ 00:05:29.389 04:54:58 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:29.651 * Looking for test storage... 
00:05:29.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:29.651 04:54:58 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.651 04:54:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.651 04:54:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.651 04:54:58 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:29.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.651 04:54:58 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:29.651 04:54:58 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.651 04:54:58 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.651 --rc genhtml_branch_coverage=1 00:05:29.651 --rc genhtml_function_coverage=1 00:05:29.651 --rc genhtml_legend=1 00:05:29.651 --rc geninfo_all_blocks=1 00:05:29.651 --rc geninfo_unexecuted_blocks=1 00:05:29.651 00:05:29.651 ' 00:05:29.651 04:54:58 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.651 --rc genhtml_branch_coverage=1 00:05:29.651 --rc genhtml_function_coverage=1 00:05:29.651 --rc genhtml_legend=1 00:05:29.651 --rc geninfo_all_blocks=1 00:05:29.651 --rc geninfo_unexecuted_blocks=1 00:05:29.651 00:05:29.651 ' 00:05:29.651 04:54:58 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.651 --rc genhtml_branch_coverage=1 00:05:29.651 --rc genhtml_function_coverage=1 00:05:29.651 --rc genhtml_legend=1 00:05:29.651 --rc geninfo_all_blocks=1 00:05:29.651 --rc geninfo_unexecuted_blocks=1 00:05:29.651 00:05:29.651 ' 00:05:29.651 04:54:58 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.651 --rc genhtml_branch_coverage=1 00:05:29.651 --rc genhtml_function_coverage=1 00:05:29.651 --rc genhtml_legend=1 00:05:29.651 --rc geninfo_all_blocks=1 00:05:29.651 --rc geninfo_unexecuted_blocks=1 00:05:29.651 00:05:29.651 ' 00:05:29.651 04:54:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:29.651 04:54:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70169 00:05:29.651 04:54:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.651 04:54:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70169 00:05:29.651 04:54:58 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70169 ']' 00:05:29.651 04:54:58 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.651 04:54:58 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.651 04:54:58 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.651 04:54:58 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.651 04:54:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.651 04:54:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:29.651 [2024-11-28 04:54:58.824850] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:29.651 [2024-11-28 04:54:58.825238] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70169 ] 00:05:29.911 [2024-11-28 04:54:58.968343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.911 [2024-11-28 04:54:59.004093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.911 [2024-11-28 04:54:59.004525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.911 [2024-11-28 04:54:59.004779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.911 [2024-11-28 04:54:59.004918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.483 04:54:59 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.483 04:54:59 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:30.483 04:54:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:30.483 04:54:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.483 04:54:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.483 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.483 POWER: Cannot set governor of lcore 0 to userspace 00:05:30.483 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.483 POWER: Cannot set governor of lcore 0 to performance 00:05:30.483 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.483 POWER: Cannot set governor of lcore 0 to userspace 00:05:30.483 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:30.483 POWER: Unable to set Power Management Environment for lcore 0 [2024-11-28 04:54:59.715825] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 [2024-11-28 04:54:59.715874] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 [2024-11-28 04:54:59.715917] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor [2024-11-28 04:54:59.716027] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 [2024-11-28 04:54:59.716053] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 [2024-11-28 04:54:59.716076] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:30.483 04:54:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.483 04:54:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:30.483 04:54:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.483 04:54:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.746 [2024-11-28 04:54:59.842924] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
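Two details of the startup above are easy to miss: the scheduler app is launched with --wait-for-rpc, so framework_set_scheduler must be issued before framework_start_init, and the DPDK governor errors (no cpufreq sysfs entries and no virtio power agent inside the VM) are non-fatal; scheduler_dynamic falls back to its default thresholds (load limit 20, core limit 80, core busy 95). A hedged sketch of the two RPCs, under the same assumption that rpc_cmd resolves to scripts/rpc.py:

  ./scripts/rpc.py framework_set_scheduler dynamic   # select the scheduler pre-init; the governor failures above are tolerated
  ./scripts/rpc.py framework_start_init              # complete initialization and start the dynamic scheduler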
00:05:30.746 04:54:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.746 04:54:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:30.746 04:54:59 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.746 04:54:59 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.746 04:54:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.746 ************************************ 00:05:30.746 START TEST scheduler_create_thread 00:05:30.746 ************************************ 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.746 2 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.746 3 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.746 4 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.746 5 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.746 6 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.746 7 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.746 8 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.746 9 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.746 10 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.746 04:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.687 04:55:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.687 04:55:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:31.687 04:55:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.687 04:55:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.070 04:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.070 04:55:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:33.070 04:55:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:33.070 04:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.070 04:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.012 ************************************ 00:05:34.012 END TEST scheduler_create_thread 00:05:34.012 ************************************ 00:05:34.012 04:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.012 00:05:34.012 real 0m3.374s 00:05:34.012 user 0m0.012s 00:05:34.012 sys 0m0.008s 00:05:34.012 04:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.012 04:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.012 04:55:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:34.012 04:55:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70169 00:05:34.012 04:55:03 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70169 ']' 00:05:34.012 04:55:03 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70169 00:05:34.012 04:55:03 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:34.012 04:55:03 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.012 04:55:03 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70169 00:05:34.273 killing process with pid 70169 00:05:34.273 04:55:03 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:34.273 04:55:03 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:34.273 04:55:03 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70169' 00:05:34.273 04:55:03 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70169 00:05:34.273 04:55:03 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70169 00:05:34.533 [2024-11-28 04:55:03.615876] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
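For reference, the thread churn this test just exercised reduces to the RPC sequence below (a condensed sketch: rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, and the loop form here is illustrative rather than the literal scheduler.sh source):

    # four pinned busy threads, one per core in masks 0x1..0x8, at 100% activity
    for mask in 0x1 0x2 0x4 0x8; do
      scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    done
    # matching pinned idle threads at 0% activity on the same cores
    for mask in 0x1 0x2 0x4 0x8; do
      scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done
    # unpinned threads: one at 30%, one created idle then raised to 50%,
    # and one created busy only to be deleted again
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    thread_id=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$thread_id"

In the run above the two unpinned creations returned thread IDs 11 and 12, which is why scheduler_thread_set_active 11 50 and scheduler_thread_delete 12 appear in the trace.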
00:05:34.792 00:05:34.792 real 0m5.225s 00:05:34.792 user 0m10.474s 00:05:34.792 sys 0m0.411s 00:05:34.792 04:55:03 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.792 04:55:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.792 ************************************ 00:05:34.792 END TEST event_scheduler 00:05:34.792 ************************************ 00:05:34.792 04:55:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:34.792 04:55:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:34.792 04:55:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.792 04:55:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.792 04:55:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.792 ************************************ 00:05:34.792 START TEST app_repeat 00:05:34.792 ************************************ 00:05:34.792 04:55:03 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:34.792 04:55:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.792 04:55:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.792 04:55:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:34.792 04:55:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.792 04:55:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:34.792 04:55:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:34.792 04:55:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:34.792 04:55:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70275 00:05:34.792 04:55:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.792 04:55:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70275' 00:05:34.792 Process app_repeat pid: 70275 00:05:34.792 04:55:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.792 spdk_app_start Round 0 00:05:34.792 04:55:03 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:34.792 04:55:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:34.792 04:55:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70275 /var/tmp/spdk-nbd.sock 00:05:34.792 04:55:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70275 ']' 00:05:34.792 04:55:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.792 04:55:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.792 04:55:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.792 04:55:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.792 04:55:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.792 [2024-11-28 04:55:03.931304] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
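The app_repeat binary that just came up was launched, per the trace above, essentially as:

    # start the repeat-test app on cores 0-1, then wait for its RPC socket
    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
        -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock

-r names the UNIX-domain RPC socket the harness drives, -m 0x3 pins the app to cores 0-1 (hence "Total cores available: 2" and the two reactor start notices below), and -t 4 matches the repeat_times=4 set in event.sh. The backgrounding and the $! capture are inferred from the repeat_pid=70275 assignment in the trace rather than shown verbatim.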
00:05:34.792 [2024-11-28 04:55:03.931410] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70275 ] 00:05:34.792 [2024-11-28 04:55:04.065394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.050 [2024-11-28 04:55:04.088238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.050 [2024-11-28 04:55:04.088380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.616 04:55:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.616 04:55:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:35.616 04:55:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.883 Malloc0 00:05:35.883 04:55:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.883 Malloc1 00:05:35.883 04:55:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.883 04:55:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.883 04:55:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.884 04:55:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.884 04:55:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.884 04:55:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.884 04:55:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.884 04:55:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.884 04:55:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.884 04:55:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.884 04:55:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.884 04:55:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.884 04:55:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.884 04:55:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.884 04:55:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.884 04:55:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.148 /dev/nbd0 00:05:36.148 04:55:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.148 04:55:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.148 04:55:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:36.148 04:55:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.148 04:55:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.148 04:55:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.148 04:55:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:36.148 04:55:05 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:36.148 04:55:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.148 04:55:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.148 04:55:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.148 1+0 records in 00:05:36.148 1+0 records out 00:05:36.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317419 s, 12.9 MB/s 00:05:36.148 04:55:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.148 04:55:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.148 04:55:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.148 04:55:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.148 04:55:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.148 04:55:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.148 04:55:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.148 04:55:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.407 /dev/nbd1 00:05:36.407 04:55:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.407 04:55:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.407 04:55:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:36.407 04:55:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.407 04:55:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.407 04:55:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.407 04:55:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:36.407 04:55:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.407 04:55:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.407 04:55:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.407 04:55:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.407 1+0 records in 00:05:36.407 1+0 records out 00:05:36.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000131668 s, 31.1 MB/s 00:05:36.407 04:55:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.407 04:55:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.407 04:55:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.407 04:55:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.407 04:55:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.407 04:55:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.407 04:55:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.407 04:55:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.407 04:55:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
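Both device probes above follow autotest_common.sh's waitfornbd helper. Its visible shape, polling /proc/partitions for the node and then proving it readable with one O_DIRECT block, can be sketched like this (the retry delay and the temp-file path are illustrative; the real helper writes to test/event/nbdtest as traced):

    waitfornbd() {
      local nbd_name=$1 i size
      # wait up to 20 tries for the kernel to publish the device
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; the first try succeeded in this run
      done
      # read one 4 KiB block with O_DIRECT to confirm the device answers I/O
      for ((i = 1; i <= 20; i++)); do
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
      done
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
    }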
00:05:36.407 04:55:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.666 { 00:05:36.666 "nbd_device": "/dev/nbd0", 00:05:36.666 "bdev_name": "Malloc0" 00:05:36.666 }, 00:05:36.666 { 00:05:36.666 "nbd_device": "/dev/nbd1", 00:05:36.666 "bdev_name": "Malloc1" 00:05:36.666 } 00:05:36.666 ]' 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.666 { 00:05:36.666 "nbd_device": "/dev/nbd0", 00:05:36.666 "bdev_name": "Malloc0" 00:05:36.666 }, 00:05:36.666 { 00:05:36.666 "nbd_device": "/dev/nbd1", 00:05:36.666 "bdev_name": "Malloc1" 00:05:36.666 } 00:05:36.666 ]' 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.666 /dev/nbd1' 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.666 /dev/nbd1' 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.666 256+0 records in 00:05:36.666 256+0 records out 00:05:36.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00790192 s, 133 MB/s 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.666 256+0 records in 00:05:36.666 256+0 records out 00:05:36.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167931 s, 62.4 MB/s 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.666 256+0 records in 00:05:36.666 256+0 records out 00:05:36.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166787 s, 62.9 MB/s 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.666 04:55:05 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.666 04:55:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.925 04:55:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.925 04:55:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.925 04:55:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.925 04:55:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.925 04:55:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.925 04:55:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.925 04:55:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.925 04:55:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.925 04:55:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.925 04:55:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.184 04:55:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.184 04:55:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.184 04:55:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.184 04:55:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.184 04:55:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.184 04:55:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.184 04:55:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.184 04:55:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.184 04:55:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.184 04:55:06 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.184 04:55:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.443 04:55:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.443 04:55:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.443 04:55:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.443 04:55:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.443 04:55:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.443 04:55:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.443 04:55:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.443 04:55:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.443 04:55:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.443 04:55:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.443 04:55:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.443 04:55:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.443 04:55:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.702 04:55:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.702 [2024-11-28 04:55:06.810744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.702 [2024-11-28 04:55:06.826447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.702 [2024-11-28 04:55:06.826449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.702 [2024-11-28 04:55:06.854663] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.702 [2024-11-28 04:55:06.854713] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.986 04:55:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.986 spdk_app_start Round 1 00:05:40.986 04:55:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:40.986 04:55:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70275 /var/tmp/spdk-nbd.sock 00:05:40.986 04:55:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70275 ']' 00:05:40.986 04:55:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.986 04:55:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.986 04:55:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
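Round 0 above ran the harness's standard nbd data check before tearing the devices down: seed a scratch file with random data, push it to every exported device with O_DIRECT, then compare each device back against the file. Condensed from the trace:

    # write phase: 1 MiB of random data, copied onto each nbd device
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    done
    # verify phase: every device must read back byte-identical
    for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M nbdrandtest "$nbd"
    done
    rm nbdrandtest

cmp exits non-zero on the first mismatching byte, so a corrupted round fails immediately rather than after a full diff.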
00:05:40.986 04:55:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.986 04:55:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.986 04:55:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.986 04:55:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:40.986 04:55:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.986 Malloc0 00:05:40.986 04:55:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.986 Malloc1 00:05:40.986 04:55:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.986 04:55:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.986 04:55:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.986 04:55:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.986 04:55:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.986 04:55:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.986 04:55:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.986 04:55:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.986 04:55:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.986 04:55:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.986 04:55:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.986 04:55:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.986 04:55:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.986 04:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.986 04:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.986 04:55:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.244 /dev/nbd0 00:05:41.244 04:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.244 04:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.244 04:55:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:41.244 04:55:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.244 04:55:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.244 04:55:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.244 04:55:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:41.244 04:55:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.244 04:55:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.244 04:55:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.244 04:55:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.244 1+0 records in 00:05:41.244 1+0 records out 
00:05:41.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164813 s, 24.9 MB/s 00:05:41.244 04:55:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.244 04:55:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.244 04:55:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.244 04:55:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.244 04:55:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.244 04:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.244 04:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.244 04:55:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.502 /dev/nbd1 00:05:41.502 04:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.502 04:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.502 04:55:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:41.502 04:55:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.502 04:55:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.502 04:55:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.502 04:55:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:41.502 04:55:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.502 04:55:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.502 04:55:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.502 04:55:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.502 1+0 records in 00:05:41.502 1+0 records out 00:05:41.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000151419 s, 27.1 MB/s 00:05:41.502 04:55:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.502 04:55:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.502 04:55:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.502 04:55:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.502 04:55:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.502 04:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.502 04:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.502 04:55:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.502 04:55:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.502 04:55:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.760 04:55:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.760 { 00:05:41.760 "nbd_device": "/dev/nbd0", 00:05:41.760 "bdev_name": "Malloc0" 00:05:41.760 }, 00:05:41.760 { 00:05:41.760 "nbd_device": "/dev/nbd1", 00:05:41.760 "bdev_name": "Malloc1" 00:05:41.760 } 
00:05:41.760 ]' 00:05:41.760 04:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.760 04:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.760 { 00:05:41.760 "nbd_device": "/dev/nbd0", 00:05:41.760 "bdev_name": "Malloc0" 00:05:41.760 }, 00:05:41.760 { 00:05:41.760 "nbd_device": "/dev/nbd1", 00:05:41.760 "bdev_name": "Malloc1" 00:05:41.760 } 00:05:41.760 ]' 00:05:41.760 04:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.760 /dev/nbd1' 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.761 /dev/nbd1' 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.761 256+0 records in 00:05:41.761 256+0 records out 00:05:41.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00978505 s, 107 MB/s 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.761 256+0 records in 00:05:41.761 256+0 records out 00:05:41.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123362 s, 85.0 MB/s 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.761 256+0 records in 00:05:41.761 256+0 records out 00:05:41.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018086 s, 58.0 MB/s 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.761 04:55:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.020 04:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.020 04:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.020 04:55:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.020 04:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.020 04:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.020 04:55:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.020 04:55:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.020 04:55:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.020 04:55:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.020 04:55:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.278 04:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.278 04:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.278 04:55:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.278 04:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.278 04:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.278 04:55:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.278 04:55:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.278 04:55:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.278 04:55:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.278 04:55:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.278 04:55:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.536 04:55:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.536 04:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.536 04:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:42.536 04:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.536 04:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.536 04:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.536 04:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.536 04:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.536 04:55:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.536 04:55:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.536 04:55:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.536 04:55:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.536 04:55:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.794 04:55:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.794 [2024-11-28 04:55:11.924824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.794 [2024-11-28 04:55:11.940596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.794 [2024-11-28 04:55:11.940601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.794 [2024-11-28 04:55:11.969240] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.794 [2024-11-28 04:55:11.969287] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:46.076 spdk_app_start Round 2 00:05:46.076 04:55:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:46.076 04:55:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:46.076 04:55:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70275 /var/tmp/spdk-nbd.sock 00:05:46.076 04:55:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70275 ']' 00:05:46.076 04:55:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.076 04:55:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:46.076 04:55:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
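The count=0 decisions in each teardown come from nbd_get_count, whose moving parts are all visible in the trace: nbd_get_disks returns a JSON array, jq pulls out the device paths, and grep -c tallies how many look like /dev/nbd*. Roughly:

    # count nbd devices still attached via the app's RPC socket
    disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$disks_name" | grep -c /dev/nbd || true)  # '|| true' keeps an empty match from aborting

With both disks stopped the array is empty, the grep matches nothing (hence the bare "true" in the trace), and the '[' 0 -ne 0 ']' guard falls through to return 0.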
00:05:46.076 04:55:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.076 04:55:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.076 04:55:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.076 04:55:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:46.076 04:55:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.076 Malloc0 00:05:46.076 04:55:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.364 Malloc1 00:05:46.364 04:55:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.364 04:55:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.364 04:55:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.364 04:55:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.364 04:55:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.364 04:55:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.364 04:55:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.364 04:55:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.364 04:55:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.364 04:55:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.364 04:55:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.364 04:55:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.364 04:55:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.364 04:55:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.364 04:55:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.364 04:55:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.652 /dev/nbd0 00:05:46.652 04:55:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.653 04:55:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.653 04:55:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:46.653 04:55:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:46.653 04:55:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:46.653 04:55:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:46.653 04:55:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:46.653 04:55:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:46.653 04:55:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:46.653 04:55:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:46.653 04:55:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.653 1+0 records in 00:05:46.653 1+0 records out 
00:05:46.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000152121 s, 26.9 MB/s 00:05:46.653 04:55:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.653 04:55:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:46.653 04:55:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.653 04:55:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:46.653 04:55:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:46.653 04:55:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.653 04:55:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.653 04:55:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.914 /dev/nbd1 00:05:46.914 04:55:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.914 04:55:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.914 04:55:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:46.914 04:55:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:46.914 04:55:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:46.914 04:55:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:46.914 04:55:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:46.914 04:55:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:46.914 04:55:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:46.914 04:55:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:46.914 04:55:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.914 1+0 records in 00:05:46.914 1+0 records out 00:05:46.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247132 s, 16.6 MB/s 00:05:46.914 04:55:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.914 04:55:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:46.914 04:55:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.914 04:55:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:46.914 04:55:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:46.914 04:55:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.914 04:55:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.914 04:55:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.914 04:55:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.914 04:55:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.914 04:55:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.914 { 00:05:46.914 "nbd_device": "/dev/nbd0", 00:05:46.914 "bdev_name": "Malloc0" 00:05:46.914 }, 00:05:46.914 { 00:05:46.914 "nbd_device": "/dev/nbd1", 00:05:46.914 "bdev_name": "Malloc1" 00:05:46.914 } 
00:05:46.914 ]' 00:05:46.914 04:55:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.914 { 00:05:46.914 "nbd_device": "/dev/nbd0", 00:05:46.914 "bdev_name": "Malloc0" 00:05:46.914 }, 00:05:46.914 { 00:05:46.914 "nbd_device": "/dev/nbd1", 00:05:46.914 "bdev_name": "Malloc1" 00:05:46.914 } 00:05:46.914 ]' 00:05:46.914 04:55:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.173 /dev/nbd1' 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.173 /dev/nbd1' 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.173 256+0 records in 00:05:47.173 256+0 records out 00:05:47.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00684207 s, 153 MB/s 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.173 256+0 records in 00:05:47.173 256+0 records out 00:05:47.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160139 s, 65.5 MB/s 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.173 256+0 records in 00:05:47.173 256+0 records out 00:05:47.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151958 s, 69.0 MB/s 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.173 04:55:16 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.173 04:55:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.431 04:55:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.432 04:55:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.690 04:55:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.690 04:55:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.690 04:55:16 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:47.690 04:55:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.690 04:55:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.690 04:55:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.690 04:55:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.690 04:55:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.690 04:55:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.690 04:55:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.690 04:55:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.690 04:55:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.690 04:55:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.950 04:55:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.950 [2024-11-28 04:55:17.213456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.950 [2024-11-28 04:55:17.230297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.950 [2024-11-28 04:55:17.230324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.209 [2024-11-28 04:55:17.259088] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.209 [2024-11-28 04:55:17.259142] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.497 04:55:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70275 /var/tmp/spdk-nbd.sock 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70275 ']' 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
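waitforlisten itself runs with xtrace disabled, so the trace only shows its entry (the rpc_addr, max_retries=100) and its exit (the (( i == 0 )) check followed by return 0). Under that limited visibility, a plausible sketch of the loop is:

    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # give up if the process died
        [ -S "$rpc_addr" ] && break              # socket present: assume it is listening
        sleep 0.5                                # assumed retry interval
      done
      (( i < max_retries ))
    }

The default socket path, the liveness check, and the sleep interval are all assumptions here; only the retry budget of 100 and the success path are confirmed by the trace.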
00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:51.497 04:55:20 event.app_repeat -- event/event.sh@39 -- # killprocess 70275 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70275 ']' 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70275 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70275 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.497 killing process with pid 70275 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70275' 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70275 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70275 00:05:51.497 spdk_app_start is called in Round 0. 00:05:51.497 Shutdown signal received, stop current app iteration 00:05:51.497 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 reinitialization... 00:05:51.497 spdk_app_start is called in Round 1. 00:05:51.497 Shutdown signal received, stop current app iteration 00:05:51.497 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 reinitialization... 00:05:51.497 spdk_app_start is called in Round 2. 00:05:51.497 Shutdown signal received, stop current app iteration 00:05:51.497 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 reinitialization... 00:05:51.497 spdk_app_start is called in Round 3. 00:05:51.497 Shutdown signal received, stop current app iteration 00:05:51.497 04:55:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:51.497 04:55:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:51.497 00:05:51.497 real 0m16.585s 00:05:51.497 user 0m36.917s 00:05:51.497 sys 0m2.100s 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.497 04:55:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.497 ************************************ 00:05:51.497 END TEST app_repeat 00:05:51.497 ************************************ 00:05:51.497 04:55:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:51.497 04:55:20 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:51.497 04:55:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.497 04:55:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.497 04:55:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.497 ************************************ 00:05:51.497 START TEST cpu_locks 00:05:51.497 ************************************ 00:05:51.497 04:55:20 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:51.497 * Looking for test storage... 
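killprocess, traced in full just above, is the teardown mirror of waitforlisten: confirm the pid is alive, make sure it is not a sudo wrapper (which would need its child killed instead), then signal and reap it.

    killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1                 # consequence assumed; the guard itself is traced
      kill -0 "$pid"                            # fails fast if the pid is already gone
      if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
      fi
      # the sudo branch (not taken here: process_name was reactor_0) is elided
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
    }

In this run the comm name was reactor_0, the sudo guard fell through, and wait 70275 returned once the app had logged its per-round spdk_app_start shutdown summary.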
00:05:51.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:51.497 04:55:20 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.497 04:55:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.497 04:55:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.497 04:55:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.497 04:55:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:51.497 04:55:20 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.497 04:55:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.497 --rc genhtml_branch_coverage=1 00:05:51.497 --rc genhtml_function_coverage=1 00:05:51.497 --rc genhtml_legend=1 00:05:51.497 --rc geninfo_all_blocks=1 00:05:51.498 --rc geninfo_unexecuted_blocks=1 00:05:51.498 00:05:51.498 ' 00:05:51.498 04:55:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.498 --rc genhtml_branch_coverage=1 00:05:51.498 --rc genhtml_function_coverage=1 
00:05:51.498 --rc genhtml_legend=1 00:05:51.498 --rc geninfo_all_blocks=1 00:05:51.498 --rc geninfo_unexecuted_blocks=1 00:05:51.498 00:05:51.498 ' 00:05:51.498 04:55:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.498 --rc genhtml_branch_coverage=1 00:05:51.498 --rc genhtml_function_coverage=1 00:05:51.498 --rc genhtml_legend=1 00:05:51.498 --rc geninfo_all_blocks=1 00:05:51.498 --rc geninfo_unexecuted_blocks=1 00:05:51.498 00:05:51.498 ' 00:05:51.498 04:55:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.498 --rc genhtml_branch_coverage=1 00:05:51.498 --rc genhtml_function_coverage=1 00:05:51.498 --rc genhtml_legend=1 00:05:51.498 --rc geninfo_all_blocks=1 00:05:51.498 --rc geninfo_unexecuted_blocks=1 00:05:51.498 00:05:51.498 ' 00:05:51.498 04:55:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:51.498 04:55:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:51.498 04:55:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:51.498 04:55:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:51.498 04:55:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.498 04:55:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.498 04:55:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.498 ************************************ 00:05:51.498 START TEST default_locks 00:05:51.498 ************************************ 00:05:51.498 04:55:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:51.498 04:55:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70694 00:05:51.498 04:55:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70694 00:05:51.498 04:55:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70694 ']' 00:05:51.498 04:55:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.498 04:55:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.498 04:55:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.498 04:55:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.498 04:55:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.498 04:55:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.498 [2024-11-28 04:55:20.749200] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
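Before the locks tests proper, the trace walks scripts/common.sh deciding which lcov flags to use: lt 1.15 2 splits both version strings on dots, dashes and colons and compares them field by field. A compact sketch of that comparison, collapsed to the less-than case (the full cmp_versions dispatches on an operator argument):

lt() {
    local -a ver1 ver2
    local v len1 len2 f1 f2
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"   # "2"    -> (2)
    len1=${#ver1[@]} len2=${#ver2[@]}
    for ((v = 0; v < (len1 > len2 ? len1 : len2); v++)); do
        f1=${ver1[v]:-0} f2=${ver2[v]:-0}   # missing fields compare as 0
        ((f1 < f2)) && return 0
        ((f1 > f2)) && return 1
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 returns 0 here, which is why the old-lcov branch above exports the lcov_branch_coverage and lcov_function_coverage options.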
00:05:51.498 [2024-11-28 04:55:20.749317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70694 ] 00:05:51.757 [2024-11-28 04:55:20.886601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.757 [2024-11-28 04:55:20.902812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.322 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.322 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:52.322 04:55:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70694 00:05:52.322 04:55:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70694 00:05:52.322 04:55:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.580 04:55:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70694 00:05:52.580 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70694 ']' 00:05:52.580 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70694 00:05:52.580 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:52.580 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.580 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70694 00:05:52.580 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.580 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.580 killing process with pid 70694 00:05:52.580 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70694' 00:05:52.580 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70694 00:05:52.580 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70694 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70694 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70694 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70694 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70694 ']' 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.840 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.840 ERROR: process (pid: 70694) is no longer running 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.840 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70694) - No such process 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:52.840 00:05:52.840 real 0m1.293s 00:05:52.840 user 0m1.315s 00:05:52.840 sys 0m0.374s 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.840 04:55:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.840 ************************************ 00:05:52.840 END TEST default_locks 00:05:52.840 ************************************ 00:05:52.840 04:55:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:52.840 04:55:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.840 04:55:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.840 04:55:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.840 ************************************ 00:05:52.840 START TEST default_locks_via_rpc 00:05:52.840 ************************************ 00:05:52.840 04:55:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:52.840 04:55:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70742 00:05:52.840 04:55:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70742 00:05:52.840 04:55:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70742 ']' 00:05:52.840 04:55:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.840 04:55:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
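Everything in this suite reduces to two probes traced above: locks_exist, which asserts a live target holds its per-core lock file, and no_locks, which asserts a dead one left none behind. A sketch of the pair, assuming the /var/tmp/spdk_cpu_lock_* naming that the later check_remaining_locks trace confirms, and nullglob-style expansion for the empty case:

locks_exist() {
    local pid=$1
    # lslocks lists file locks per holder; a healthy target must hold
    # one on a spdk_cpu_lock_* file.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

no_locks() {
    local lock_files=()
    shopt -s nullglob
    lock_files=(/var/tmp/spdk_cpu_lock_*)   # stale lock files, if any
    shopt -u nullglob
    (( ${#lock_files[@]} == 0 ))
}

In default_locks this runs right after killprocess 70694, proving that SIGTERM releases the core 0 lock rather than leaking it.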
00:05:52.840 04:55:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.840 04:55:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.840 04:55:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.840 04:55:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.840 [2024-11-28 04:55:22.104473] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:52.840 [2024-11-28 04:55:22.104602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70742 ] 00:05:53.100 [2024-11-28 04:55:22.243935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.100 [2024-11-28 04:55:22.260087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.667 04:55:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.667 04:55:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:53.667 04:55:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:53.667 04:55:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.667 04:55:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.667 04:55:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.667 04:55:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:53.667 04:55:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.667 04:55:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.667 04:55:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.667 04:55:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:53.667 04:55:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.667 04:55:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.925 04:55:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.925 04:55:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70742 00:05:53.925 04:55:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70742 00:05:53.925 04:55:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.925 04:55:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70742 00:05:53.925 04:55:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 70742 ']' 00:05:53.925 04:55:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 70742 00:05:53.925 04:55:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:53.925 04:55:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.925 04:55:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70742 00:05:53.925 04:55:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.925 04:55:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.925 killing process with pid 70742 00:05:53.925 04:55:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70742' 00:05:53.925 04:55:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 70742 00:05:53.925 04:55:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 70742 00:05:54.184 00:05:54.184 real 0m1.341s 00:05:54.184 user 0m1.417s 00:05:54.184 sys 0m0.367s 00:05:54.184 04:55:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.184 ************************************ 00:05:54.184 END TEST default_locks_via_rpc 00:05:54.184 ************************************ 00:05:54.184 04:55:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.184 04:55:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:54.184 04:55:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.184 04:55:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.184 04:55:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.184 ************************************ 00:05:54.184 START TEST non_locking_app_on_locked_coremask 00:05:54.184 ************************************ 00:05:54.184 04:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:54.184 04:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70783 00:05:54.184 04:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70783 /var/tmp/spdk.sock 00:05:54.184 04:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70783 ']' 00:05:54.184 04:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.184 04:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.184 04:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.184 04:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.184 04:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.184 04:55:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.442 [2024-11-28 04:55:23.487548] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
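default_locks_via_rpc, which wraps up above, toggles the same locks over the RPC socket instead of at process start. Both RPC names appear verbatim in the trace; below is the plain rpc.py equivalent of the rpc_cmd wrapper, with the script path taken from the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Release the per-core lock files while the target keeps running...
$rpc -s /var/tmp/spdk.sock framework_disable_cpumask_locks
# ...then reacquire them; this errors out if another process claimed
# one of the cores in the meantime.
$rpc -s /var/tmp/spdk.sock framework_enable_cpumask_locks

The test asserts no_locks after the first call and locks_exist after the second, bracketing the window in which the cores are up for grabs.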
00:05:54.442 [2024-11-28 04:55:23.487648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70783 ] 00:05:54.442 [2024-11-28 04:55:23.625323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.442 [2024-11-28 04:55:23.642490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.009 04:55:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.009 04:55:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.009 04:55:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70799 00:05:55.009 04:55:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70799 /var/tmp/spdk2.sock 00:05:55.009 04:55:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70799 ']' 00:05:55.009 04:55:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:55.009 04:55:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.009 04:55:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.009 04:55:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.009 04:55:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.009 04:55:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.267 [2024-11-28 04:55:24.358126] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:55.267 [2024-11-28 04:55:24.358543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70799 ] 00:05:55.267 [2024-11-28 04:55:24.505437] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
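That final NOTICE, 'CPU core locks deactivated.', is the escape hatch non_locking_app_on_locked_coremask exercises: a second target started on the already-locked core 0 with --disable-cpumask-locks boots instead of aborting. Reduced to its two launches (paths, masks and sockets from the trace; the waitforlisten step between them is elided):

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

# First instance claims core 0 and its lock file.
$spdk_tgt -m 0x1 &
# Second instance shares core 0 but skips lock acquisition entirely,
# answering on its own RPC socket.
$spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &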
00:05:55.267 [2024-11-28 04:55:24.505478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.267 [2024-11-28 04:55:24.537679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.231 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.231 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.231 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70783 00:05:56.231 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70783 00:05:56.231 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.231 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70783 00:05:56.231 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70783 ']' 00:05:56.231 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70783 00:05:56.231 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:56.231 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.232 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70783 00:05:56.232 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.232 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.232 killing process with pid 70783 00:05:56.232 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70783' 00:05:56.232 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70783 00:05:56.232 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70783 00:05:56.795 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70799 00:05:56.795 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70799 ']' 00:05:56.795 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70799 00:05:56.795 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:56.795 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.795 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70799 00:05:56.795 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.795 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.795 killing process with pid 70799 00:05:56.795 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70799' 00:05:56.795 04:55:25 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70799 00:05:56.795 04:55:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70799 00:05:57.053 00:05:57.053 real 0m2.740s 00:05:57.053 user 0m3.051s 00:05:57.053 sys 0m0.675s 00:05:57.053 04:55:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.053 ************************************ 00:05:57.053 END TEST non_locking_app_on_locked_coremask 00:05:57.053 ************************************ 00:05:57.053 04:55:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.053 04:55:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:57.053 04:55:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.053 04:55:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.053 04:55:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.053 ************************************ 00:05:57.053 START TEST locking_app_on_unlocked_coremask 00:05:57.053 ************************************ 00:05:57.053 04:55:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:57.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.053 04:55:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70857 00:05:57.053 04:55:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70857 /var/tmp/spdk.sock 00:05:57.053 04:55:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70857 ']' 00:05:57.053 04:55:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.053 04:55:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.053 04:55:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:57.053 04:55:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.053 04:55:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.053 04:55:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.053 [2024-11-28 04:55:26.298835] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:57.053 [2024-11-28 04:55:26.298955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70857 ] 00:05:57.311 [2024-11-28 04:55:26.439732] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:57.311 [2024-11-28 04:55:26.439770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.311 [2024-11-28 04:55:26.455981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.873 04:55:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.874 04:55:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:57.874 04:55:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70873 00:05:57.874 04:55:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70873 /var/tmp/spdk2.sock 00:05:57.874 04:55:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70873 ']' 00:05:57.874 04:55:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:57.874 04:55:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.874 04:55:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.874 04:55:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.874 04:55:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.874 04:55:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.130 [2024-11-28 04:55:27.190110] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
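locking_app_on_unlocked_coremask, starting above, inverts that arrangement: now the first target opts out of locking and the plain second one takes the core 0 lock (the locks_exist 70873 probe further down checks the second pid). A sketch of the inverted launch order, with a quick way to see who holds the lock (waitforlisten again elided):

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

# First instance leaves core 0 unlocked...
$spdk_tgt -m 0x1 --disable-cpumask-locks &
# ...so a later lock-taking instance can still claim it.
$spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &

# Only the second pid should show up as a lock holder:
lslocks | grep spdk_cpu_lock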
00:05:58.130 [2024-11-28 04:55:27.190248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70873 ] 00:05:58.130 [2024-11-28 04:55:27.339376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.130 [2024-11-28 04:55:27.371765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.060 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.060 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.060 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70873 00:05:59.061 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70873 00:05:59.061 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.061 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70857 00:05:59.061 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70857 ']' 00:05:59.061 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70857 00:05:59.061 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.061 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.061 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70857 00:05:59.061 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.061 killing process with pid 70857 00:05:59.061 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.061 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70857' 00:05:59.061 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70857 00:05:59.061 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70857 00:05:59.624 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70873 00:05:59.624 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70873 ']' 00:05:59.624 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70873 00:05:59.624 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.624 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.624 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70873 00:05:59.624 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.624 killing process with pid 70873 00:05:59.624 04:55:28 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.624 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70873' 00:05:59.624 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70873 00:05:59.624 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70873 00:05:59.881 00:05:59.881 real 0m2.765s 00:05:59.881 user 0m3.114s 00:05:59.881 sys 0m0.704s 00:05:59.881 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.881 04:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.881 ************************************ 00:05:59.881 END TEST locking_app_on_unlocked_coremask 00:05:59.881 ************************************ 00:05:59.881 04:55:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:59.881 04:55:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.881 04:55:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.881 04:55:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.881 ************************************ 00:05:59.881 START TEST locking_app_on_locked_coremask 00:05:59.881 ************************************ 00:05:59.881 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:59.882 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70931 00:05:59.882 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70931 /var/tmp/spdk.sock 00:05:59.882 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70931 ']' 00:05:59.882 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.882 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.882 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.882 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.882 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.882 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.882 [2024-11-28 04:55:29.098700] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:59.882 [2024-11-28 04:55:29.098818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70931 ] 00:06:00.138 [2024-11-28 04:55:29.239436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.138 [2024-11-28 04:55:29.255591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70941 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70941 /var/tmp/spdk2.sock 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70941 /var/tmp/spdk2.sock 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 70941 /var/tmp/spdk2.sock 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70941 ']' 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.702 04:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.959 [2024-11-28 04:55:29.994475] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
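locking_app_on_locked_coremask has just launched a second lock-taking target on the occupied core and wrapped the wait in NOT: the test passes only if startup fails, and the claim error duly follows below. A minimal version of that inversion wrapper (the real helper in autotest_common.sh also tracks exit-status ranges):

NOT() {
    # Succeed exactly when the wrapped command fails; negative tests
    # like this one expect the failure.
    if "$@"; then
        return 1
    fi
    return 0
}

As traced: NOT waitforlisten 70941 /var/tmp/spdk2.sock. waitforlisten returns 1 once pid 70941 dies on the lock conflict, so NOT, and with it the test, succeeds.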
00:06:00.959 [2024-11-28 04:55:29.994582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70941 ] 00:06:00.959 [2024-11-28 04:55:30.142320] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70931 has claimed it. 00:06:00.959 [2024-11-28 04:55:30.142369] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:01.523 ERROR: process (pid: 70941) is no longer running 00:06:01.523 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70941) - No such process 00:06:01.523 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.523 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:01.523 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:01.523 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.523 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.523 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.523 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70931 00:06:01.523 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.523 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70931 00:06:01.781 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70931 00:06:01.781 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70931 ']' 00:06:01.781 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70931 00:06:01.781 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.781 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.781 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70931 00:06:01.781 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.781 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.781 killing process with pid 70931 00:06:01.781 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70931' 00:06:01.781 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70931 00:06:01.781 04:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70931 00:06:02.038 00:06:02.038 real 0m2.049s 00:06:02.038 user 0m2.315s 00:06:02.038 sys 0m0.460s 00:06:02.038 04:55:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.038 04:55:31 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:02.038 ************************************ 00:06:02.038 END TEST locking_app_on_locked_coremask 00:06:02.038 ************************************ 00:06:02.038 04:55:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:02.038 04:55:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.038 04:55:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.038 04:55:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.038 ************************************ 00:06:02.038 START TEST locking_overlapped_coremask 00:06:02.038 ************************************ 00:06:02.038 04:55:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:02.038 04:55:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70989 00:06:02.038 04:55:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 70989 /var/tmp/spdk.sock 00:06:02.038 04:55:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 70989 ']' 00:06:02.038 04:55:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.038 04:55:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.038 04:55:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.038 04:55:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:02.038 04:55:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.038 04:55:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.038 [2024-11-28 04:55:31.190424] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:02.038 [2024-11-28 04:55:31.190541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70989 ] 00:06:02.296 [2024-11-28 04:55:31.331953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.296 [2024-11-28 04:55:31.350465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.296 [2024-11-28 04:55:31.350617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.296 [2024-11-28 04:55:31.350679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.861 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.861 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:02.861 04:55:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:02.861 04:55:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71007 00:06:02.861 04:55:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71007 /var/tmp/spdk2.sock 00:06:02.861 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:02.862 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71007 /var/tmp/spdk2.sock 00:06:02.862 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:02.862 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.862 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:02.862 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.862 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71007 /var/tmp/spdk2.sock 00:06:02.862 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71007 ']' 00:06:02.862 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.862 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.862 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.862 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.862 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.862 [2024-11-28 04:55:32.095717] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
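The two masks in locking_overlapped_coremask collide on exactly one core, which is why the claim error just below names core 2. The overlap is plain bit arithmetic, checkable in any shell:

# 0x07 = 0b00111 -> cores 0,1,2 (first target,  -m 0x7)
# 0x1c = 0b11100 -> cores 2,3,4 (second target, -m 0x1c)
printf 'contested mask: %#x\n' $((0x7 & 0x1c))   # -> 0x4, i.e. core 2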
00:06:02.862 [2024-11-28 04:55:32.095837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71007 ] 00:06:03.120 [2024-11-28 04:55:32.256047] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70989 has claimed it. 00:06:03.120 [2024-11-28 04:55:32.256108] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:03.686 ERROR: process (pid: 71007) is no longer running 00:06:03.686 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71007) - No such process 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 70989 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 70989 ']' 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 70989 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70989 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.686 killing process with pid 70989 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70989' 00:06:03.686 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 70989 00:06:03.686 04:55:32 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 70989 00:06:03.945 00:06:03.945 real 0m1.861s 00:06:03.945 user 0m5.217s 00:06:03.945 sys 0m0.356s 00:06:03.945 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.945 04:55:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.945 ************************************ 00:06:03.945 END TEST locking_overlapped_coremask 00:06:03.945 ************************************ 00:06:03.945 04:55:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:03.945 04:55:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.945 04:55:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.945 04:55:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.945 ************************************ 00:06:03.945 START TEST locking_overlapped_coremask_via_rpc 00:06:03.945 ************************************ 00:06:03.945 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:03.945 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71043 00:06:03.945 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71043 /var/tmp/spdk.sock 00:06:03.945 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71043 ']' 00:06:03.945 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.945 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.945 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.945 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.945 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.945 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:03.945 [2024-11-28 04:55:33.092216] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:03.945 [2024-11-28 04:55:33.092336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71043 ] 00:06:04.245 [2024-11-28 04:55:33.231833] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
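check_remaining_locks, traced during the cleanup above, pins down the lock-file naming: one /var/tmp/spdk_cpu_lock_NNN per claimed core, zero-padded to three digits. A sketch matching the traced comparison for the -m 0x7 target:

check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)
    # Cores 0..2 must map to exactly these three files, nothing more.
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
}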
00:06:04.245 [2024-11-28 04:55:33.231869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.245 [2024-11-28 04:55:33.251349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.245 [2024-11-28 04:55:33.251468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.245 [2024-11-28 04:55:33.251543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.811 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.811 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:04.811 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71056 00:06:04.811 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71056 /var/tmp/spdk2.sock 00:06:04.811 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:04.811 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71056 ']' 00:06:04.811 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.811 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.811 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.811 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.811 04:55:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.811 [2024-11-28 04:55:33.990881] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:04.811 [2024-11-28 04:55:33.991282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71056 ] 00:06:05.069 [2024-11-28 04:55:34.154248] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
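The two cpumasks launched in this test are chosen to collide on exactly one core, which is what the claim error below complains about; the overlap can be verified with plain shell arithmetic:

    # 0x07 = 0b00111 -> cores 0,1,2 (first target, pid 71043)
    # 0x1c = 0b11100 -> cores 2,3,4 (second target, pid 71056)
    printf 'overlap: 0x%x\n' $((0x07 & 0x1c))   # prints "overlap: 0x4", i.e. bit 2 = core 2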
00:06:05.069 [2024-11-28 04:55:34.154293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.069 [2024-11-28 04:55:34.200140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.069 [2024-11-28 04:55:34.200270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.069 [2024-11-28 04:55:34.200347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.635 [2024-11-28 04:55:34.851305] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71043 has claimed it. 
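The per-core locks behind that failure are plain files under /var/tmp, one per claimed core. A rough bash emulation of the claim; the file naming comes straight from the trace, while the flock-style exclusive locking is an assumption about what app.c does internally:

    core=2
    lockfile=$(printf '/var/tmp/spdk_cpu_lock_%03d' "$core")
    exec 9>"$lockfile"         # create/open the per-core lock file on fd 9
    if ! flock -xn 9; then     # try a non-blocking exclusive lock
        echo "Cannot create lock on core $core, another process has claimed it" >&2
        exit 1
    fi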
00:06:05.635 request: 00:06:05.635 { 00:06:05.635 "method": "framework_enable_cpumask_locks", 00:06:05.635 "req_id": 1 00:06:05.635 } 00:06:05.635 Got JSON-RPC error response 00:06:05.635 response: 00:06:05.635 { 00:06:05.635 "code": -32603, 00:06:05.635 "message": "Failed to claim CPU core: 2" 00:06:05.635 } 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71043 /var/tmp/spdk.sock 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71043 ']' 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.635 04:55:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.907 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.907 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.907 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71056 /var/tmp/spdk2.sock 00:06:05.907 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71056 ']' 00:06:05.907 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.907 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.907 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
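The same failure is reproducible by hand with the rpc.py client used throughout this run, pointed at the second target's socket:

    # While pid 71043 (started first, mask 0x7) still holds core 2:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks
    # -> JSON-RPC error -32603, "Failed to claim CPU core: 2", exactly as above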
00:06:05.907 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.907 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.173 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.173 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.173 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:06.173 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:06.173 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:06.173 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:06.173 00:06:06.173 real 0m2.257s 00:06:06.173 user 0m1.049s 00:06:06.173 sys 0m0.137s 00:06:06.173 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.173 04:55:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.173 ************************************ 00:06:06.173 END TEST locking_overlapped_coremask_via_rpc 00:06:06.173 ************************************ 00:06:06.173 04:55:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:06.173 04:55:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71043 ]] 00:06:06.173 04:55:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71043 00:06:06.173 04:55:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71043 ']' 00:06:06.173 04:55:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71043 00:06:06.173 04:55:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:06.173 04:55:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.173 04:55:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71043 00:06:06.173 04:55:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.173 killing process with pid 71043 00:06:06.173 04:55:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.173 04:55:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71043' 00:06:06.173 04:55:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71043 00:06:06.173 04:55:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71043 00:06:06.431 04:55:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71056 ]] 00:06:06.431 04:55:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71056 00:06:06.431 04:55:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71056 ']' 00:06:06.431 04:55:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71056 00:06:06.431 04:55:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:06.431 04:55:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.431 
04:55:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71056 00:06:06.431 04:55:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:06.431 04:55:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:06.431 killing process with pid 71056 00:06:06.431 04:55:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71056' 00:06:06.431 04:55:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71056 00:06:06.431 04:55:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71056 00:06:06.690 04:55:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:06.690 04:55:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:06.690 04:55:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71043 ]] 00:06:06.690 04:55:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71043 00:06:06.690 04:55:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71043 ']' 00:06:06.690 04:55:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71043 00:06:06.690 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71043) - No such process 00:06:06.690 Process with pid 71043 is not found 00:06:06.690 04:55:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71043 is not found' 00:06:06.690 04:55:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71056 ]] 00:06:06.690 04:55:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71056 00:06:06.690 04:55:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71056 ']' 00:06:06.690 04:55:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71056 00:06:06.690 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71056) - No such process 00:06:06.690 Process with pid 71056 is not found 00:06:06.690 04:55:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71056 is not found' 00:06:06.690 04:55:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:06.690 00:06:06.690 real 0m15.306s 00:06:06.690 user 0m27.623s 00:06:06.690 sys 0m3.793s 00:06:06.690 04:55:35 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.690 04:55:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.690 ************************************ 00:06:06.690 END TEST cpu_locks 00:06:06.690 ************************************ 00:06:06.690 00:06:06.690 real 0m41.454s 00:06:06.690 user 1m21.427s 00:06:06.690 sys 0m6.781s 00:06:06.690 04:55:35 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.690 04:55:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.690 ************************************ 00:06:06.690 END TEST event 00:06:06.690 ************************************ 00:06:06.690 04:55:35 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:06.690 04:55:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.690 04:55:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.690 04:55:35 -- common/autotest_common.sh@10 -- # set +x 00:06:06.690 ************************************ 00:06:06.690 START TEST thread 00:06:06.690 ************************************ 00:06:06.690 04:55:35 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:06.690 * Looking for test storage... 
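The kill/cleanup sequence traced above follows one fixed pattern per pid: existence check, name check, kill, wait. A condensed sketch of that killprocess flow (the real helper in autotest_common.sh carries a few more guards, e.g. the uname and sudo checks visible in the trace):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || { echo "Process with pid $pid is not found"; return 0; }
        ps --no-headers -o comm= "$pid"   # e.g. reactor_0 / reactor_2 in the trace
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"        # wait only succeeds for our own children
    }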
00:06:06.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:06.690 04:55:35 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.690 04:55:35 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.690 04:55:35 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.949 04:55:36 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.949 04:55:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.949 04:55:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.949 04:55:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.949 04:55:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.949 04:55:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.949 04:55:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.949 04:55:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.949 04:55:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.949 04:55:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.949 04:55:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.949 04:55:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.949 04:55:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:06.949 04:55:36 thread -- scripts/common.sh@345 -- # : 1 00:06:06.949 04:55:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.949 04:55:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.949 04:55:36 thread -- scripts/common.sh@365 -- # decimal 1 00:06:06.949 04:55:36 thread -- scripts/common.sh@353 -- # local d=1 00:06:06.949 04:55:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.949 04:55:36 thread -- scripts/common.sh@355 -- # echo 1 00:06:06.949 04:55:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.949 04:55:36 thread -- scripts/common.sh@366 -- # decimal 2 00:06:06.949 04:55:36 thread -- scripts/common.sh@353 -- # local d=2 00:06:06.949 04:55:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.949 04:55:36 thread -- scripts/common.sh@355 -- # echo 2 00:06:06.949 04:55:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.949 04:55:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.949 04:55:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.949 04:55:36 thread -- scripts/common.sh@368 -- # return 0 00:06:06.949 04:55:36 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.949 04:55:36 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.949 --rc genhtml_branch_coverage=1 00:06:06.949 --rc genhtml_function_coverage=1 00:06:06.949 --rc genhtml_legend=1 00:06:06.949 --rc geninfo_all_blocks=1 00:06:06.949 --rc geninfo_unexecuted_blocks=1 00:06:06.949 00:06:06.949 ' 00:06:06.949 04:55:36 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.949 --rc genhtml_branch_coverage=1 00:06:06.949 --rc genhtml_function_coverage=1 00:06:06.949 --rc genhtml_legend=1 00:06:06.949 --rc geninfo_all_blocks=1 00:06:06.949 --rc geninfo_unexecuted_blocks=1 00:06:06.949 00:06:06.949 ' 00:06:06.949 04:55:36 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:06.949 --rc genhtml_branch_coverage=1 00:06:06.949 --rc genhtml_function_coverage=1 00:06:06.949 --rc genhtml_legend=1 00:06:06.949 --rc geninfo_all_blocks=1 00:06:06.949 --rc geninfo_unexecuted_blocks=1 00:06:06.949 00:06:06.949 ' 00:06:06.949 04:55:36 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.949 --rc genhtml_branch_coverage=1 00:06:06.949 --rc genhtml_function_coverage=1 00:06:06.949 --rc genhtml_legend=1 00:06:06.949 --rc geninfo_all_blocks=1 00:06:06.949 --rc geninfo_unexecuted_blocks=1 00:06:06.949 00:06:06.949 ' 00:06:06.949 04:55:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:06.949 04:55:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:06.949 04:55:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.949 04:55:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.949 ************************************ 00:06:06.949 START TEST thread_poller_perf 00:06:06.949 ************************************ 00:06:06.949 04:55:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:06.949 [2024-11-28 04:55:36.071306] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:06.949 [2024-11-28 04:55:36.071414] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71184 ] 00:06:06.949 [2024-11-28 04:55:36.219576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.208 Running 1000 pollers for 1 seconds with 1 microseconds period. 
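The banner just printed spells out how the three poller_perf flags map onto the run:

    # poller_perf flag -> banner mapping for the run above:
    #   -b 1000 -> "1000 pollers"
    #   -t 1    -> "for 1 seconds"
    #   -l 1    -> "with 1 microseconds period" (-l 0 in the second run below)
    /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1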
00:06:07.208 [2024-11-28 04:55:36.237062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.140 [2024-11-28T04:55:37.424Z] ====================================== 00:06:08.140 [2024-11-28T04:55:37.424Z] busy:2611977358 (cyc) 00:06:08.140 [2024-11-28T04:55:37.424Z] total_run_count: 411000 00:06:08.140 [2024-11-28T04:55:37.424Z] tsc_hz: 2600000000 (cyc) 00:06:08.140 [2024-11-28T04:55:37.424Z] ====================================== 00:06:08.140 [2024-11-28T04:55:37.424Z] poller_cost: 6355 (cyc), 2444 (nsec) 00:06:08.140 00:06:08.140 real 0m1.237s 00:06:08.140 user 0m1.074s 00:06:08.140 sys 0m0.046s 00:06:08.140 04:55:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.140 04:55:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.140 ************************************ 00:06:08.140 END TEST thread_poller_perf 00:06:08.140 ************************************ 00:06:08.140 04:55:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:08.140 04:55:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:08.140 04:55:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.140 04:55:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.140 ************************************ 00:06:08.140 START TEST thread_poller_perf 00:06:08.140 ************************************ 00:06:08.140 04:55:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:08.140 [2024-11-28 04:55:37.341042] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:08.140 [2024-11-28 04:55:37.341160] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71225 ] 00:06:08.399 [2024-11-28 04:55:37.482494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.399 Running 1000 pollers for 1 seconds with 0 microseconds period. 
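The first run's poller_cost line follows directly from the counters in its results block; checking the arithmetic in shell:

    busy=2611977358 runs=411000 tsc_hz=2600000000
    echo "cycles/run: $((busy / runs))"                        # -> 6355 (cyc)
    echo "nsec/run:   $((busy * 1000000000 / runs / tsc_hz))"  # -> 2444 (nsec)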
00:06:08.399 [2024-11-28 04:55:37.498668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.333 [2024-11-28T04:55:38.617Z] ====================================== 00:06:09.333 [2024-11-28T04:55:38.617Z] busy:2602555190 (cyc) 00:06:09.333 [2024-11-28T04:55:38.617Z] total_run_count: 5129000 00:06:09.333 [2024-11-28T04:55:38.617Z] tsc_hz: 2600000000 (cyc) 00:06:09.333 [2024-11-28T04:55:38.617Z] ====================================== 00:06:09.333 [2024-11-28T04:55:38.617Z] poller_cost: 507 (cyc), 195 (nsec) 00:06:09.333 00:06:09.333 real 0m1.221s 00:06:09.333 user 0m1.066s 00:06:09.333 sys 0m0.049s 00:06:09.333 04:55:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.333 ************************************ 00:06:09.333 END TEST thread_poller_perf 00:06:09.333 ************************************ 00:06:09.333 04:55:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.333 04:55:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:09.333 00:06:09.333 real 0m2.678s 00:06:09.333 user 0m2.253s 00:06:09.333 sys 0m0.210s 00:06:09.333 04:55:38 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.333 04:55:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.333 ************************************ 00:06:09.333 END TEST thread 00:06:09.333 ************************************ 00:06:09.333 04:55:38 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:09.333 04:55:38 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:09.333 04:55:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.333 04:55:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.333 04:55:38 -- common/autotest_common.sh@10 -- # set +x 00:06:09.333 ************************************ 00:06:09.333 START TEST app_cmdline 00:06:09.333 ************************************ 00:06:09.333 04:55:38 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:09.591 * Looking for test storage... 
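Every section of this log is bracketed by the same START/END banners plus a real/user/sys summary; a rough sketch of the run_test wrapper that produces them (the real helper in autotest_common.sh also validates its arguments and toggles xtrace, as the traced '[' 2 -le 1 ']' and xtrace_disable steps show):

    run_test() {
        local name=$1; shift
        printf '%s\n' '************************************' "START TEST $name" '************************************'
        time "$@"   # accounts for the real/user/sys lines after each section
        local rc=$?
        printf '%s\n' '************************************' "END TEST $name" '************************************'
        return $rc
    }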
00:06:09.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:09.591 04:55:38 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.591 04:55:38 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.591 04:55:38 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.591 04:55:38 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.591 04:55:38 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:09.591 04:55:38 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.591 04:55:38 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.591 --rc genhtml_branch_coverage=1 00:06:09.591 --rc genhtml_function_coverage=1 00:06:09.591 --rc genhtml_legend=1 00:06:09.591 --rc geninfo_all_blocks=1 00:06:09.591 --rc geninfo_unexecuted_blocks=1 00:06:09.591 00:06:09.591 ' 00:06:09.591 04:55:38 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.591 --rc genhtml_branch_coverage=1 00:06:09.591 --rc genhtml_function_coverage=1 00:06:09.591 --rc genhtml_legend=1 00:06:09.591 --rc geninfo_all_blocks=1 00:06:09.591 --rc geninfo_unexecuted_blocks=1 00:06:09.591 
00:06:09.591 ' 00:06:09.591 04:55:38 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.591 --rc genhtml_branch_coverage=1 00:06:09.591 --rc genhtml_function_coverage=1 00:06:09.591 --rc genhtml_legend=1 00:06:09.591 --rc geninfo_all_blocks=1 00:06:09.591 --rc geninfo_unexecuted_blocks=1 00:06:09.591 00:06:09.591 ' 00:06:09.591 04:55:38 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.591 --rc genhtml_branch_coverage=1 00:06:09.591 --rc genhtml_function_coverage=1 00:06:09.591 --rc genhtml_legend=1 00:06:09.591 --rc geninfo_all_blocks=1 00:06:09.591 --rc geninfo_unexecuted_blocks=1 00:06:09.591 00:06:09.591 ' 00:06:09.591 04:55:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:09.591 04:55:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71303 00:06:09.591 04:55:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71303 00:06:09.591 04:55:38 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71303 ']' 00:06:09.591 04:55:38 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.591 04:55:38 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.591 04:55:38 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.591 04:55:38 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.591 04:55:38 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:09.591 04:55:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.591 [2024-11-28 04:55:38.820597] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:09.591 [2024-11-28 04:55:38.820717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71303 ] 00:06:09.849 [2024-11-28 04:55:38.964924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.849 [2024-11-28 04:55:38.983370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.414 04:55:39 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.414 04:55:39 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:10.414 04:55:39 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:10.673 { 00:06:10.673 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:06:10.673 "fields": { 00:06:10.673 "major": 25, 00:06:10.673 "minor": 1, 00:06:10.673 "patch": 0, 00:06:10.673 "suffix": "-pre", 00:06:10.673 "commit": "35cd3e84d" 00:06:10.673 } 00:06:10.673 } 00:06:10.673 04:55:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:10.673 04:55:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:10.673 04:55:39 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:10.673 04:55:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:10.673 04:55:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:10.673 04:55:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:10.673 04:55:39 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.673 04:55:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:10.673 04:55:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:10.673 04:55:39 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.673 04:55:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:10.673 04:55:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:10.673 04:55:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.673 04:55:39 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:10.673 04:55:39 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.673 04:55:39 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.673 04:55:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.673 04:55:39 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.673 04:55:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.673 04:55:39 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.673 04:55:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.673 04:55:39 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.673 04:55:39 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:10.673 04:55:39 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.932 request: 00:06:10.932 { 00:06:10.932 "method": "env_dpdk_get_mem_stats", 00:06:10.932 "req_id": 1 00:06:10.932 } 00:06:10.932 Got JSON-RPC error response 00:06:10.932 response: 00:06:10.932 { 00:06:10.932 "code": -32601, 00:06:10.932 "message": "Method not found" 00:06:10.932 } 00:06:10.932 04:55:40 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:10.932 04:55:40 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.932 04:55:40 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:10.932 04:55:40 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.932 04:55:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71303 00:06:10.932 04:55:40 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71303 ']' 00:06:10.932 04:55:40 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71303 00:06:10.932 04:55:40 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:10.932 04:55:40 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.932 04:55:40 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71303 00:06:10.932 04:55:40 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.932 04:55:40 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.932 04:55:40 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71303' 00:06:10.932 killing process with pid 71303 00:06:10.932 04:55:40 app_cmdline -- common/autotest_common.sh@973 -- # kill 71303 00:06:10.932 04:55:40 app_cmdline -- common/autotest_common.sh@978 -- # wait 71303 00:06:11.190 00:06:11.190 real 0m1.718s 00:06:11.190 user 0m2.047s 00:06:11.190 sys 0m0.377s 00:06:11.190 04:55:40 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.190 ************************************ 00:06:11.190 END TEST app_cmdline 00:06:11.190 ************************************ 00:06:11.190 04:55:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.190 04:55:40 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:11.190 04:55:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.190 04:55:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.190 04:55:40 -- common/autotest_common.sh@10 -- # set +x 00:06:11.190 ************************************ 00:06:11.190 START TEST version 00:06:11.190 ************************************ 00:06:11.190 04:55:40 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:11.190 * Looking for test storage... 
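The app_cmdline section above hinges on the allow-list the target was started with: only the two listed methods are reachable, and anything else returns -32601. Reproducing by hand against a target started the same way as in the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --rpcs-allowed spdk_get_version,rpc_get_methods &

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version         # allowed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods          # allowed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # -> -32601 "Method not found"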
00:06:11.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:11.190 04:55:40 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.190 04:55:40 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.190 04:55:40 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.448 04:55:40 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.448 04:55:40 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.448 04:55:40 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.448 04:55:40 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.448 04:55:40 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.448 04:55:40 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.448 04:55:40 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.448 04:55:40 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.448 04:55:40 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.448 04:55:40 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.448 04:55:40 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.449 04:55:40 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.449 04:55:40 version -- scripts/common.sh@344 -- # case "$op" in 00:06:11.449 04:55:40 version -- scripts/common.sh@345 -- # : 1 00:06:11.449 04:55:40 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.449 04:55:40 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.449 04:55:40 version -- scripts/common.sh@365 -- # decimal 1 00:06:11.449 04:55:40 version -- scripts/common.sh@353 -- # local d=1 00:06:11.449 04:55:40 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.449 04:55:40 version -- scripts/common.sh@355 -- # echo 1 00:06:11.449 04:55:40 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.449 04:55:40 version -- scripts/common.sh@366 -- # decimal 2 00:06:11.449 04:55:40 version -- scripts/common.sh@353 -- # local d=2 00:06:11.449 04:55:40 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.449 04:55:40 version -- scripts/common.sh@355 -- # echo 2 00:06:11.449 04:55:40 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.449 04:55:40 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.449 04:55:40 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.449 04:55:40 version -- scripts/common.sh@368 -- # return 0 00:06:11.449 04:55:40 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.449 04:55:40 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.449 --rc genhtml_branch_coverage=1 00:06:11.449 --rc genhtml_function_coverage=1 00:06:11.449 --rc genhtml_legend=1 00:06:11.449 --rc geninfo_all_blocks=1 00:06:11.449 --rc geninfo_unexecuted_blocks=1 00:06:11.449 00:06:11.449 ' 00:06:11.449 04:55:40 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.449 --rc genhtml_branch_coverage=1 00:06:11.449 --rc genhtml_function_coverage=1 00:06:11.449 --rc genhtml_legend=1 00:06:11.449 --rc geninfo_all_blocks=1 00:06:11.449 --rc geninfo_unexecuted_blocks=1 00:06:11.449 00:06:11.449 ' 00:06:11.449 04:55:40 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:11.449 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:11.449 --rc genhtml_branch_coverage=1 00:06:11.449 --rc genhtml_function_coverage=1 00:06:11.449 --rc genhtml_legend=1 00:06:11.449 --rc geninfo_all_blocks=1 00:06:11.449 --rc geninfo_unexecuted_blocks=1 00:06:11.449 00:06:11.449 ' 00:06:11.449 04:55:40 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.449 --rc genhtml_branch_coverage=1 00:06:11.449 --rc genhtml_function_coverage=1 00:06:11.449 --rc genhtml_legend=1 00:06:11.449 --rc geninfo_all_blocks=1 00:06:11.449 --rc geninfo_unexecuted_blocks=1 00:06:11.449 00:06:11.449 ' 00:06:11.449 04:55:40 version -- app/version.sh@17 -- # get_header_version major 00:06:11.449 04:55:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:11.449 04:55:40 version -- app/version.sh@14 -- # cut -f2 00:06:11.449 04:55:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.449 04:55:40 version -- app/version.sh@17 -- # major=25 00:06:11.449 04:55:40 version -- app/version.sh@18 -- # get_header_version minor 00:06:11.449 04:55:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:11.449 04:55:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.449 04:55:40 version -- app/version.sh@14 -- # cut -f2 00:06:11.449 04:55:40 version -- app/version.sh@18 -- # minor=1 00:06:11.449 04:55:40 version -- app/version.sh@19 -- # get_header_version patch 00:06:11.449 04:55:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:11.449 04:55:40 version -- app/version.sh@14 -- # cut -f2 00:06:11.449 04:55:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.449 04:55:40 version -- app/version.sh@19 -- # patch=0 00:06:11.449 04:55:40 version -- app/version.sh@20 -- # get_header_version suffix 00:06:11.449 04:55:40 version -- app/version.sh@14 -- # cut -f2 00:06:11.449 04:55:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:11.449 04:55:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.449 04:55:40 version -- app/version.sh@20 -- # suffix=-pre 00:06:11.449 04:55:40 version -- app/version.sh@22 -- # version=25.1 00:06:11.449 04:55:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:11.449 04:55:40 version -- app/version.sh@28 -- # version=25.1rc0 00:06:11.449 04:55:40 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:11.449 04:55:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:11.449 04:55:40 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:11.449 04:55:40 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:11.449 00:06:11.449 real 0m0.185s 00:06:11.449 user 0m0.113s 00:06:11.449 sys 0m0.101s 00:06:11.449 04:55:40 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.449 04:55:40 version -- common/autotest_common.sh@10 -- # set +x 00:06:11.449 ************************************ 00:06:11.449 END TEST version 00:06:11.449 ************************************ 00:06:11.449 04:55:40 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:11.449 04:55:40 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:11.449 04:55:40 -- spdk/autotest.sh@194 -- # uname -s 00:06:11.449 04:55:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:11.449 04:55:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:11.449 04:55:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:11.449 04:55:40 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:11.449 04:55:40 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:11.449 04:55:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:11.449 04:55:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.449 04:55:40 -- common/autotest_common.sh@10 -- # set +x 00:06:11.449 ************************************ 00:06:11.449 START TEST blockdev_nvme 00:06:11.449 ************************************ 00:06:11.449 04:55:40 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:11.449 * Looking for test storage... 00:06:11.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:11.449 04:55:40 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.449 04:55:40 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.449 04:55:40 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.449 04:55:40 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.449 04:55:40 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:11.449 04:55:40 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.449 04:55:40 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.449 --rc genhtml_branch_coverage=1 00:06:11.449 --rc genhtml_function_coverage=1 00:06:11.449 --rc genhtml_legend=1 00:06:11.449 --rc geninfo_all_blocks=1 00:06:11.449 --rc geninfo_unexecuted_blocks=1 00:06:11.449 00:06:11.449 ' 00:06:11.449 04:55:40 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.449 --rc genhtml_branch_coverage=1 00:06:11.449 --rc genhtml_function_coverage=1 00:06:11.449 --rc genhtml_legend=1 00:06:11.449 --rc geninfo_all_blocks=1 00:06:11.449 --rc geninfo_unexecuted_blocks=1 00:06:11.449 00:06:11.449 ' 00:06:11.449 04:55:40 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:11.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.449 --rc genhtml_branch_coverage=1 00:06:11.449 --rc genhtml_function_coverage=1 00:06:11.449 --rc genhtml_legend=1 00:06:11.449 --rc geninfo_all_blocks=1 00:06:11.449 --rc geninfo_unexecuted_blocks=1 00:06:11.450 00:06:11.450 ' 00:06:11.450 04:55:40 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.450 --rc genhtml_branch_coverage=1 00:06:11.450 --rc genhtml_function_coverage=1 00:06:11.450 --rc genhtml_legend=1 00:06:11.450 --rc geninfo_all_blocks=1 00:06:11.450 --rc geninfo_unexecuted_blocks=1 00:06:11.450 00:06:11.450 ' 00:06:11.450 04:55:40 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:11.450 04:55:40 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:11.450 04:55:40 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:11.450 04:55:40 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:11.450 04:55:40 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:11.450 04:55:40 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:11.450 04:55:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:11.450 04:55:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:11.450 04:55:40 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:11.450 04:55:40 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:11.450 04:55:40 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:11.450 04:55:40 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:11.707 04:55:40 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:11.707 04:55:40 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:11.707 04:55:40 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:11.707 04:55:40 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:11.707 04:55:40 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:11.707 04:55:40 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:11.707 04:55:40 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:11.707 04:55:40 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:11.707 04:55:40 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:11.707 04:55:40 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:11.707 04:55:40 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:11.707 04:55:40 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:11.707 04:55:40 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71464 00:06:11.707 04:55:40 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:11.707 04:55:40 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71464 00:06:11.707 04:55:40 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71464 ']' 00:06:11.707 04:55:40 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:11.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.707 04:55:40 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.707 04:55:40 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.707 04:55:40 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.707 04:55:40 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.707 04:55:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.707 [2024-11-28 04:55:40.833396] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:11.707 [2024-11-28 04:55:40.833781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71464 ] 00:06:11.707 [2024-11-28 04:55:40.988172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.964 [2024-11-28 04:55:41.006341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.533 04:55:41 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.533 04:55:41 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:12.533 04:55:41 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:12.533 04:55:41 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:12.533 04:55:41 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:12.533 04:55:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:12.533 04:55:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:12.533 04:55:41 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:12.533 04:55:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.533 04:55:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.792 04:55:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.792 04:55:41 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:12.792 04:55:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.792 04:55:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.792 04:55:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.792 04:55:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:12.792 04:55:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:12.792 04:55:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.792 04:55:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.792 04:55:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.792 04:55:42 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:12.792 04:55:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.792 04:55:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.792 04:55:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.792 04:55:42 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:12.792 04:55:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.792 04:55:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.792 04:55:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.792 04:55:42 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:12.792 04:55:42 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:12.792 04:55:42 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:12.792 04:55:42 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.792 04:55:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.065 04:55:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.065 04:55:42 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:13.065 04:55:42 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:13.066 04:55:42 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "7d1ed6e6-0607-4ff3-a34e-8721f0b0fc8f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7d1ed6e6-0607-4ff3-a34e-8721f0b0fc8f",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f4cbe9e4-759c-4d05-a005-78e592ace941"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f4cbe9e4-759c-4d05-a005-78e592ace941",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "335091e0-19f7-4e1c-8686-5e88bba8ac53"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "335091e0-19f7-4e1c-8686-5e88bba8ac53",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e3e5f25c-441d-407e-b8c4-556d43a4e411"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e3e5f25c-441d-407e-b8c4-556d43a4e411",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d393bd7f-8c8a-4536-96cc-f26009211053"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "d393bd7f-8c8a-4536-96cc-f26009211053",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "69303651-3fa6-4d0a-9ee9-0f710454567b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "69303651-3fa6-4d0a-9ee9-0f710454567b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:13.066 04:55:42 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:13.066 04:55:42 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:13.066 04:55:42 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:13.066 04:55:42 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 71464 00:06:13.066 04:55:42 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71464 ']' 00:06:13.066 04:55:42 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71464 00:06:13.066 04:55:42 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:13.066 04:55:42 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.066 04:55:42 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71464 00:06:13.066 killing process with pid 71464 00:06:13.066 04:55:42 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.066 04:55:42 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.066 04:55:42 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71464' 00:06:13.066 04:55:42 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71464 00:06:13.066 04:55:42 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71464 00:06:13.361 04:55:42 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:13.361 04:55:42 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:13.361 04:55:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:13.361 04:55:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.361 04:55:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.361 ************************************ 00:06:13.361 START TEST bdev_hello_world 00:06:13.361 ************************************ 00:06:13.361 04:55:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:13.361 [2024-11-28 04:55:42.470782] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:13.361 [2024-11-28 04:55:42.471021] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71537 ] 00:06:13.361 [2024-11-28 04:55:42.616020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.361 [2024-11-28 04:55:42.635458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.928 [2024-11-28 04:55:43.004348] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:13.928 [2024-11-28 04:55:43.004391] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:13.928 [2024-11-28 04:55:43.004409] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:13.928 [2024-11-28 04:55:43.006471] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:13.928 [2024-11-28 04:55:43.006896] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:13.928 [2024-11-28 04:55:43.006917] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:13.928 [2024-11-28 04:55:43.007135] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:06:13.928 00:06:13.928 [2024-11-28 04:55:43.007156] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:13.928 00:06:13.928 real 0m0.736s 00:06:13.928 user 0m0.489s 00:06:13.928 sys 0m0.144s 00:06:13.928 04:55:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.928 ************************************ 00:06:13.928 END TEST bdev_hello_world 00:06:13.928 ************************************ 00:06:13.928 04:55:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:13.928 04:55:43 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:13.928 04:55:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:13.928 04:55:43 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.928 04:55:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.928 ************************************ 00:06:13.928 START TEST bdev_bounds 00:06:13.928 ************************************ 00:06:13.928 04:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:13.928 Process bdevio pid: 71557 00:06:13.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.928 04:55:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71557 00:06:13.928 04:55:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.928 04:55:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71557' 00:06:13.928 04:55:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71557 00:06:13.928 04:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 71557 ']' 00:06:13.928 04:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.928 04:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.928 04:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.928 04:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.928 04:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:13.928 04:55:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:14.212 [2024-11-28 04:55:43.247519] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:14.213 [2024-11-28 04:55:43.247628] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71557 ] 00:06:14.213 [2024-11-28 04:55:43.392982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.213 [2024-11-28 04:55:43.412989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.213 [2024-11-28 04:55:43.413328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.213 [2024-11-28 04:55:43.413400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.148 04:55:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.148 04:55:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:15.148 04:55:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:15.148 I/O targets: 00:06:15.148 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:15.148 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:15.148 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:15.148 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:15.148 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:15.148 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:15.148 00:06:15.148 00:06:15.148 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.148 http://cunit.sourceforge.net/ 00:06:15.148 00:06:15.148 00:06:15.148 Suite: bdevio tests on: Nvme3n1 00:06:15.148 Test: blockdev write read block ...passed 00:06:15.148 Test: blockdev write zeroes read block ...passed 00:06:15.148 Test: blockdev write zeroes read no split ...passed 00:06:15.148 Test: blockdev write zeroes read split ...passed 00:06:15.148 Test: blockdev write zeroes read split partial ...passed 00:06:15.148 Test: blockdev reset ...[2024-11-28 04:55:44.183481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:15.148 [2024-11-28 04:55:44.185475] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:06:15.148 passed 00:06:15.148 Test: blockdev write read 8 blocks ...passed 00:06:15.148 Test: blockdev write read size > 128k ...passed
00:06:15.148 Test: blockdev write read invalid size ...passed 00:06:15.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.148 Test: blockdev write read max offset ...passed 00:06:15.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:15.148 Test: blockdev writev readv 8 blocks ...passed 00:06:15.148 Test: blockdev writev readv 30 x 1block ...passed 00:06:15.148 Test: blockdev writev readv block ...passed 00:06:15.148 Test: blockdev writev readv size > 128k ...passed 00:06:15.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:15.148 Test: blockdev comparev and writev ...[2024-11-28 04:55:44.190226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be206000 len:0x1000 00:06:15.148 [2024-11-28 04:55:44.190272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:15.148 passed 00:06:15.148 Test: blockdev nvme passthru rw ...passed 00:06:15.148 Test: blockdev nvme passthru vendor specific ...passed 00:06:15.148 Test: blockdev nvme admin passthru ...[2024-11-28 04:55:44.190723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:15.148 [2024-11-28 04:55:44.190751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:15.148 passed 00:06:15.148 Test: blockdev copy ...passed 00:06:15.148 Suite: bdevio tests on: Nvme2n3 00:06:15.148 Test: blockdev write read block ...passed 00:06:15.148 Test: blockdev write zeroes read block ...passed 00:06:15.148 Test: blockdev write zeroes read no split ...passed 00:06:15.148 Test: blockdev write zeroes read split ...passed 00:06:15.148 Test: blockdev write zeroes read split partial ...passed 00:06:15.148 Test: blockdev reset ...[2024-11-28 04:55:44.204763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:15.148 passed 00:06:15.148 Test: blockdev write read 8 blocks ...[2024-11-28 04:55:44.206626] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:15.148 passed 00:06:15.148 Test: blockdev write read size > 128k ...passed 00:06:15.148 Test: blockdev write read invalid size ...passed 00:06:15.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.148 Test: blockdev write read max offset ...passed 00:06:15.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:15.148 Test: blockdev writev readv 8 blocks ...passed 00:06:15.148 Test: blockdev writev readv 30 x 1block ...passed 00:06:15.148 Test: blockdev writev readv block ...passed 00:06:15.148 Test: blockdev writev readv size > 128k ...passed 00:06:15.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:15.148 Test: blockdev comparev and writev ...[2024-11-28 04:55:44.210385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2baa02000 len:0x1000 00:06:15.148 [2024-11-28 04:55:44.210423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:15.148 passed 00:06:15.148 Test: blockdev nvme passthru rw ...passed 00:06:15.148 Test: blockdev nvme passthru vendor specific ...passed 00:06:15.148 Test: blockdev nvme admin passthru ...[2024-11-28 04:55:44.210844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:15.148 [2024-11-28 04:55:44.210866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:15.148 passed 00:06:15.148 Test: blockdev copy ...passed 00:06:15.148 Suite: bdevio tests on: Nvme2n2 00:06:15.148 Test: blockdev write read block ...passed 00:06:15.148 Test: blockdev write zeroes read block ...passed 00:06:15.148 Test: blockdev write zeroes read no split ...passed 00:06:15.148 Test: blockdev write zeroes read split ...passed 00:06:15.148 Test: blockdev write zeroes read split partial ...passed 00:06:15.148 Test: blockdev reset ...[2024-11-28 04:55:44.225487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:15.148 [2024-11-28 04:55:44.227219] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:15.148 passed 00:06:15.148 Test: blockdev write read 8 blocks ...passed 00:06:15.148 Test: blockdev write read size > 128k ...passed
00:06:15.148 Test: blockdev write read invalid size ...passed 00:06:15.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.148 Test: blockdev write read max offset ...passed 00:06:15.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:15.148 Test: blockdev writev readv 8 blocks ...passed 00:06:15.148 Test: blockdev writev readv 30 x 1block ...passed 00:06:15.148 Test: blockdev writev readv block ...passed 00:06:15.148 Test: blockdev writev readv size > 128k ...passed 00:06:15.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:15.148 Test: blockdev comparev and writev ...[2024-11-28 04:55:44.231498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1e3b000 len:0x1000 00:06:15.148 [2024-11-28 04:55:44.231534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:15.148 passed 00:06:15.148 Test: blockdev nvme passthru rw ...passed 00:06:15.148 Test: blockdev nvme passthru vendor specific ...passed 00:06:15.148 Test: blockdev nvme admin passthru ...[2024-11-28 04:55:44.231974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:15.148 [2024-11-28 04:55:44.231995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:15.148 passed 00:06:15.148 Test: blockdev copy ...passed 00:06:15.148 Suite: bdevio tests on: Nvme2n1 00:06:15.148 Test: blockdev write read block ...passed 00:06:15.148 Test: blockdev write zeroes read block ...passed 00:06:15.148 Test: blockdev write zeroes read no split ...passed 00:06:15.148 Test: blockdev write zeroes read split ...passed 00:06:15.148 Test: blockdev write zeroes read split partial ...passed 00:06:15.148 Test: blockdev reset ...[2024-11-28 04:55:44.246054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:15.148 passed 00:06:15.148 Test: blockdev write read 8 blocks ...[2024-11-28 04:55:44.247895] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:15.148 passed 00:06:15.148 Test: blockdev write read size > 128k ...passed 00:06:15.148 Test: blockdev write read invalid size ...passed 00:06:15.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.148 Test: blockdev write read max offset ...passed 00:06:15.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:15.148 Test: blockdev writev readv 8 blocks ...passed 00:06:15.148 Test: blockdev writev readv 30 x 1block ...passed 00:06:15.148 Test: blockdev writev readv block ...passed 00:06:15.148 Test: blockdev writev readv size > 128k ...passed 00:06:15.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:15.148 Test: blockdev comparev and writev ...[2024-11-28 04:55:44.252087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1e37000 len:0x1000 00:06:15.148 [2024-11-28 04:55:44.252125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:15.148 passed 00:06:15.148 Test: blockdev nvme passthru rw ...passed 00:06:15.148 Test: blockdev nvme passthru vendor specific ...passed 00:06:15.148 Test: blockdev nvme admin passthru ...[2024-11-28 04:55:44.252539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:15.148 [2024-11-28 04:55:44.252566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:15.148 passed 00:06:15.148 Test: blockdev copy ...passed 00:06:15.148 Suite: bdevio tests on: Nvme1n1 00:06:15.148 Test: blockdev write read block ...passed 00:06:15.148 Test: blockdev write zeroes read block ...passed 00:06:15.148 Test: blockdev write zeroes read no split ...passed 00:06:15.148 Test: blockdev write zeroes read split ...passed 00:06:15.148 Test: blockdev write zeroes read split partial ...passed 00:06:15.148 Test: blockdev reset ...[2024-11-28 04:55:44.266305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:15.148 [2024-11-28 04:55:44.267825] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:15.149 passed 00:06:15.149 Test: blockdev write read 8 blocks ...passed 00:06:15.149 Test: blockdev write read size > 128k ...passed 00:06:15.149 Test: blockdev write read invalid size ...passed 00:06:15.149 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.149 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.149 Test: blockdev write read max offset ...passed 00:06:15.149 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:15.149 Test: blockdev writev readv 8 blocks ...passed 00:06:15.149 Test: blockdev writev readv 30 x 1block ...passed 00:06:15.149 Test: blockdev writev readv block ...passed 00:06:15.149 Test: blockdev writev readv size > 128k ...passed 00:06:15.149 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:15.149 Test: blockdev comparev and writev ...[2024-11-28 04:55:44.271548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1e33000 len:0x1000 00:06:15.149 [2024-11-28 04:55:44.271583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:15.149 passed 00:06:15.149 Test: blockdev nvme passthru rw ...passed 00:06:15.149 Test: blockdev nvme passthru vendor specific ...passed 00:06:15.149 Test: blockdev nvme admin passthru ...[2024-11-28 04:55:44.272007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:15.149 [2024-11-28 04:55:44.272034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:15.149 passed 00:06:15.149 Test: blockdev copy ...passed 00:06:15.149 Suite: bdevio tests on: Nvme0n1 00:06:15.149 Test: blockdev write read block ...passed 00:06:15.149 Test: blockdev write zeroes read block ...passed 00:06:15.149 Test: blockdev write zeroes read no split ...passed 00:06:15.149 Test: blockdev write zeroes read split ...passed 00:06:15.149 Test: blockdev write zeroes read split partial ...passed 00:06:15.149 Test: blockdev reset ...[2024-11-28 04:55:44.287361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:15.149 passed 00:06:15.149 Test: blockdev write read 8 blocks ...[2024-11-28 04:55:44.288965] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:06:15.149 passed 00:06:15.149 Test: blockdev write read size > 128k ...passed 00:06:15.149 Test: blockdev write read invalid size ...passed 00:06:15.149 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.149 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.149 Test: blockdev write read max offset ...passed 00:06:15.149 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:15.149 Test: blockdev writev readv 8 blocks ...passed 00:06:15.149 Test: blockdev writev readv 30 x 1block ...passed 00:06:15.149 Test: blockdev writev readv block ...passed 00:06:15.149 Test: blockdev writev readv size > 128k ...passed 00:06:15.149 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:15.149 Test: blockdev comparev and writev ...passed 00:06:15.149 Test: blockdev nvme passthru rw ...[2024-11-28 04:55:44.292394] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:15.149 separate metadata which is not supported yet. 00:06:15.149 passed 00:06:15.149 Test: blockdev nvme passthru vendor specific ...passed 00:06:15.149 Test: blockdev nvme admin passthru ...[2024-11-28 04:55:44.292732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:15.149 [2024-11-28 04:55:44.292768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:15.149 passed 00:06:15.149 Test: blockdev copy ...passed 00:06:15.149 00:06:15.149 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.149 suites 6 6 n/a 0 0 00:06:15.149 tests 138 138 138 0 0 00:06:15.149 asserts 893 893 893 0 n/a 00:06:15.149 00:06:15.149 Elapsed time = 0.287 seconds 00:06:15.149 0 00:06:15.149 04:55:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71557 00:06:15.149 04:55:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 71557 ']' 00:06:15.149 04:55:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 71557 00:06:15.149 04:55:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:15.149 04:55:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.149 04:55:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71557 00:06:15.149 04:55:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.149 04:55:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.149 04:55:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71557' 00:06:15.149 killing process with pid 71557 00:06:15.149 04:55:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 71557 00:06:15.149 04:55:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 71557 00:06:15.407 04:55:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:15.407 00:06:15.407 real 0m1.266s 00:06:15.407 user 0m3.284s 00:06:15.407 sys 0m0.228s 00:06:15.407 04:55:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.407 04:55:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:15.407 ************************************ 00:06:15.407 END TEST bdev_bounds 00:06:15.407 
************************************ 00:06:15.407 04:55:44 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:15.407 04:55:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:15.407 04:55:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.407 04:55:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:15.407 ************************************ 00:06:15.407 START TEST bdev_nbd 00:06:15.407 ************************************ 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71611 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71611 /var/tmp/spdk-nbd.sock 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 71611 ']' 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:15.407 04:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:15.407 [2024-11-28 04:55:44.556877] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:15.407 [2024-11-28 04:55:44.556988] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.665 [2024-11-28 04:55:44.701449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.665 [2024-11-28 04:55:44.719565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.233 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.233 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:16.233 04:55:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:16.233 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.233 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:16.233 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:16.233 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:16.233 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.233 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:16.233 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:16.233 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:16.233 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:16.233 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:16.233 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.233 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:16.491 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:16.491 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:16.491 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:16.491 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:16.491 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # 
local i 00:06:16.491 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.491 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.491 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:16.491 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:16.491 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.491 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.491 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.491 1+0 records in 00:06:16.491 1+0 records out 00:06:16.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348659 s, 11.7 MB/s 00:06:16.491 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.491 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:16.492 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.492 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.492 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:16.492 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.492 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.492 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.750 1+0 records in 00:06:16.750 1+0 records out 00:06:16.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331989 s, 12.3 MB/s 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.750 04:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.008 1+0 records in 00:06:17.008 1+0 records out 00:06:17.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544525 s, 7.5 MB/s 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.008 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:17.265 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:17.265 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:17.265 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:17.266 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:17.266 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.266 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.266 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.266 04:55:46 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:17.266 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.266 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.266 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.266 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.266 1+0 records in 00:06:17.266 1+0 records out 00:06:17.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386422 s, 10.6 MB/s 00:06:17.266 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.266 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.266 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.266 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.266 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.266 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.266 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.266 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:17.523 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:17.523 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:17.523 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:17.523 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:17.523 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.523 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.523 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.523 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:17.523 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.523 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.523 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.523 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.523 1+0 records in 00:06:17.524 1+0 records out 00:06:17.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448945 s, 9.1 MB/s 00:06:17.524 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.524 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.524 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.524 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.524 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.524 04:55:46 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.524 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.524 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.783 1+0 records in 00:06:17.783 1+0 records out 00:06:17.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395212 s, 10.4 MB/s 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.783 04:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.783 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:17.783 { 00:06:17.783 "nbd_device": "/dev/nbd0", 00:06:17.783 "bdev_name": "Nvme0n1" 00:06:17.783 }, 00:06:17.783 { 00:06:17.783 "nbd_device": "/dev/nbd1", 00:06:17.783 "bdev_name": "Nvme1n1" 00:06:17.783 }, 00:06:17.783 { 00:06:17.783 "nbd_device": "/dev/nbd2", 00:06:17.783 "bdev_name": "Nvme2n1" 00:06:17.783 }, 00:06:17.783 { 00:06:17.783 "nbd_device": "/dev/nbd3", 00:06:17.783 "bdev_name": "Nvme2n2" 00:06:17.783 }, 00:06:17.783 { 00:06:17.783 "nbd_device": "/dev/nbd4", 00:06:17.783 "bdev_name": "Nvme2n3" 00:06:17.783 }, 00:06:17.783 { 00:06:17.783 "nbd_device": "/dev/nbd5", 00:06:17.783 "bdev_name": "Nvme3n1" 00:06:17.783 } 00:06:17.783 ]' 00:06:17.783 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:17.783 04:55:47 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # echo '[ 00:06:17.783 { 00:06:17.783 "nbd_device": "/dev/nbd0", 00:06:17.783 "bdev_name": "Nvme0n1" 00:06:17.783 }, 00:06:17.783 { 00:06:17.783 "nbd_device": "/dev/nbd1", 00:06:17.783 "bdev_name": "Nvme1n1" 00:06:17.783 }, 00:06:17.783 { 00:06:17.783 "nbd_device": "/dev/nbd2", 00:06:17.783 "bdev_name": "Nvme2n1" 00:06:17.783 }, 00:06:17.783 { 00:06:17.783 "nbd_device": "/dev/nbd3", 00:06:17.783 "bdev_name": "Nvme2n2" 00:06:17.783 }, 00:06:17.783 { 00:06:17.783 "nbd_device": "/dev/nbd4", 00:06:17.783 "bdev_name": "Nvme2n3" 00:06:17.783 }, 00:06:17.783 { 00:06:17.783 "nbd_device": "/dev/nbd5", 00:06:17.783 "bdev_name": "Nvme3n1" 00:06:17.783 } 00:06:17.783 ]' 00:06:17.783 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:17.783 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:17.783 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.783 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:17.783 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.783 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:17.783 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.783 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.042 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.042 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.042 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.042 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.042 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.042 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.042 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.042 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.042 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.042 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.300 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.300 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.300 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.300 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.300 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.300 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.300 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.300 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.300 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.300 04:55:47 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:18.558 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:18.558 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:18.558 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:18.558 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.558 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.558 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:18.558 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.558 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.558 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.558 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:18.817 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:18.817 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:18.817 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:18.817 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.817 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.817 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:18.817 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.817 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.817 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.817 04:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.076 04:55:48 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.076 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:19.335 
04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.335 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:19.593 /dev/nbd0 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.593 1+0 records in 00:06:19.593 1+0 records out 00:06:19.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337696 s, 12.1 MB/s 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.593 04:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:19.854 /dev/nbd1 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
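The readiness check repeated for each /dev/nbdN in this trace is a single polling pattern: wait for the device name to appear in /proc/partitions, then prove the device is actually readable with one 4 KiB direct-I/O read (the dd/stat/rm sequence that resumes just below for nbd1). A minimal bash sketch of that pattern; the 20-attempt bound and the direct-I/O probe are taken from the trace, while the sleep back-off and the /tmp scratch path are assumptions, since every device here comes up on the first try:

    # Sketch of the waitfornbd pattern traced above. $1 is a bare device
    # name such as "nbd0"; the scratch path is illustrative only.
    waitfornbd() {
        local nbd_name=$1 i size
        # Phase 1: poll until the kernel registers the partition entry.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off between attempts
        done
        # Phase 2: one direct-I/O read proves the device can service I/O.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1   # assumed back-off between attempts
        done
        return 1
    }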
00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.854 1+0 records in 00:06:19.854 1+0 records out 00:06:19.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000980213 s, 4.2 MB/s 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.854 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:20.112 /dev/nbd10 00:06:20.112 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:20.112 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:20.112 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:20.112 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:20.112 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.112 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.112 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:20.112 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:20.112 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.112 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.112 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.112 1+0 records in 00:06:20.113 1+0 records out 00:06:20.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645229 s, 6.3 MB/s 00:06:20.113 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.113 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:20.113 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.113 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:20.113 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:20.113 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.113 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.113 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:20.371 /dev/nbd11 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd11 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.371 1+0 records in 00:06:20.371 1+0 records out 00:06:20.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277677 s, 14.8 MB/s 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.371 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:20.629 /dev/nbd12 00:06:20.629 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:20.629 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:20.629 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:20.629 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:20.629 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.629 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.629 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:20.629 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:20.629 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.629 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.629 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.629 1+0 records in 00:06:20.630 1+0 records out 00:06:20.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558443 s, 7.3 MB/s 00:06:20.630 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.630 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:20.630 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.630 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:20.630 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:20.630 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.630 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.630 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:20.888 /dev/nbd13 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:20.888 1+0 records in 00:06:20.888 1+0 records out 00:06:20.888 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560002 s, 7.3 MB/s 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.888 04:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.888 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.888 { 00:06:20.888 "nbd_device": "/dev/nbd0", 00:06:20.888 "bdev_name": "Nvme0n1" 00:06:20.888 }, 00:06:20.888 { 00:06:20.888 "nbd_device": "/dev/nbd1", 00:06:20.888 "bdev_name": "Nvme1n1" 00:06:20.888 
}, 00:06:20.888 { 00:06:20.888 "nbd_device": "/dev/nbd10", 00:06:20.888 "bdev_name": "Nvme2n1" 00:06:20.888 }, 00:06:20.888 { 00:06:20.888 "nbd_device": "/dev/nbd11", 00:06:20.888 "bdev_name": "Nvme2n2" 00:06:20.888 }, 00:06:20.888 { 00:06:20.888 "nbd_device": "/dev/nbd12", 00:06:20.888 "bdev_name": "Nvme2n3" 00:06:20.888 }, 00:06:20.888 { 00:06:20.888 "nbd_device": "/dev/nbd13", 00:06:20.888 "bdev_name": "Nvme3n1" 00:06:20.888 } 00:06:20.888 ]' 00:06:20.888 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.888 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.888 { 00:06:20.888 "nbd_device": "/dev/nbd0", 00:06:20.888 "bdev_name": "Nvme0n1" 00:06:20.888 }, 00:06:20.888 { 00:06:20.888 "nbd_device": "/dev/nbd1", 00:06:20.888 "bdev_name": "Nvme1n1" 00:06:20.888 }, 00:06:20.888 { 00:06:20.888 "nbd_device": "/dev/nbd10", 00:06:20.888 "bdev_name": "Nvme2n1" 00:06:20.888 }, 00:06:20.888 { 00:06:20.888 "nbd_device": "/dev/nbd11", 00:06:20.888 "bdev_name": "Nvme2n2" 00:06:20.888 }, 00:06:20.888 { 00:06:20.888 "nbd_device": "/dev/nbd12", 00:06:20.888 "bdev_name": "Nvme2n3" 00:06:20.888 }, 00:06:20.888 { 00:06:20.888 "nbd_device": "/dev/nbd13", 00:06:20.888 "bdev_name": "Nvme3n1" 00:06:20.888 } 00:06:20.888 ]' 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.147 /dev/nbd1 00:06:21.147 /dev/nbd10 00:06:21.147 /dev/nbd11 00:06:21.147 /dev/nbd12 00:06:21.147 /dev/nbd13' 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.147 /dev/nbd1 00:06:21.147 /dev/nbd10 00:06:21.147 /dev/nbd11 00:06:21.147 /dev/nbd12 00:06:21.147 /dev/nbd13' 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:21.147 256+0 records in 00:06:21.147 256+0 records out 00:06:21.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102019 s, 103 MB/s 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.147 256+0 records in 00:06:21.147 256+0 records out 
00:06:21.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0609058 s, 17.2 MB/s 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.147 256+0 records in 00:06:21.147 256+0 records out 00:06:21.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.104297 s, 10.1 MB/s 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.147 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:21.414 256+0 records in 00:06:21.414 256+0 records out 00:06:21.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0607769 s, 17.3 MB/s 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:21.414 256+0 records in 00:06:21.414 256+0 records out 00:06:21.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0619907 s, 16.9 MB/s 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:21.414 256+0 records in 00:06:21.414 256+0 records out 00:06:21.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0606449 s, 17.3 MB/s 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:21.414 256+0 records in 00:06:21.414 256+0 records out 00:06:21.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.070573 s, 14.9 MB/s 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.414 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.686 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.686 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.686 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.686 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.686 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.686 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.686 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.686 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.686 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.686 04:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.944 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.944 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.944 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.944 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.944 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.944 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:06:21.944 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.944 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.944 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.944 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:22.202 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:22.202 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:22.202 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:22.202 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.202 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.202 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:22.202 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.202 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.202 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.202 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:22.460 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.461 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:22.719 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd13 00:06:22.719 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:22.719 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:22.720 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.720 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.720 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:22.720 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.720 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.720 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.720 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.720 04:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.978 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.978 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.978 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.978 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.978 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.978 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.978 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:22.978 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.978 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.978 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.978 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.978 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.978 04:55:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:22.978 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.978 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:22.978 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:23.236 malloc_lvol_verify 00:06:23.236 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:23.495 cc6e8f69-b424-4971-a0af-29294b1f5cfd 00:06:23.495 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:23.754 eafccd93-1c25-46f5-a65d-3da666847b6d 00:06:23.754 04:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:23.754 /dev/nbd0 00:06:23.754 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:23.754 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 
00:06:23.754 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:23.754 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:23.754 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:23.754 mke2fs 1.47.0 (5-Feb-2023) 00:06:23.754 Discarding device blocks: 0/4096 done 00:06:23.754 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:23.754 00:06:23.754 Allocating group tables: 0/1 done 00:06:23.754 Writing inode tables: 0/1 done 00:06:23.754 Creating journal (1024 blocks): done 00:06:23.754 Writing superblocks and filesystem accounting information: 0/1 done 00:06:23.754 00:06:23.754 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:23.754 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.754 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:23.754 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.754 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:23.754 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.754 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71611 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 71611 ']' 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 71611 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71611 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.012 killing process with pid 71611 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71611' 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 71611 00:06:24.012 04:55:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 71611 00:06:24.272 04:55:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:24.272 00:06:24.272 real 0m8.930s 00:06:24.272 user 0m13.245s 
00:06:24.272 sys 0m2.903s 00:06:24.272 04:55:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.272 ************************************ 00:06:24.272 END TEST bdev_nbd 00:06:24.272 ************************************ 00:06:24.272 04:55:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:24.272 04:55:53 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:24.272 04:55:53 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:24.272 skipping fio tests on NVMe due to multi-ns failures. 00:06:24.272 04:55:53 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:24.272 04:55:53 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:24.272 04:55:53 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:24.272 04:55:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:24.272 04:55:53 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.272 04:55:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.272 ************************************ 00:06:24.272 START TEST bdev_verify 00:06:24.272 ************************************ 00:06:24.272 04:55:53 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:24.272 [2024-11-28 04:55:53.529839] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:24.272 [2024-11-28 04:55:53.529929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71973 ] 00:06:24.531 [2024-11-28 04:55:53.667732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.531 [2024-11-28 04:55:53.684687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.531 [2024-11-28 04:55:53.684714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.789 Running I/O for 5 seconds... 
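For reference, the verify pass whose per-second samples and latency table follow is a single bdevperf run; reassembled from the xtrace onto one command line (flag comments give the standard bdevperf meanings; -C and the trailing empty argument are carried over verbatim from the trace):

    # -q 128: queue depth per job      -o 4096: I/O size in bytes
    # -w verify: read-back-and-check   -t 5: run time in seconds
    # -m 0x3: reactor core mask (cores 0 and 1)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''

Because -m 0x3 starts two reactors (the "Reactor started on core 0/1" notices above), every bdev appears twice in the table below: one job pinned to core mask 0x1 and one to 0x2.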
00:06:27.098 25856.00 IOPS, 101.00 MiB/s [2024-11-28T04:55:57.315Z] 25728.00 IOPS, 100.50 MiB/s [2024-11-28T04:55:58.248Z] 25856.00 IOPS, 101.00 MiB/s [2024-11-28T04:55:59.182Z] 25920.00 IOPS, 101.25 MiB/s [2024-11-28T04:55:59.443Z] 25792.00 IOPS, 100.75 MiB/s 00:06:30.159 Latency(us) 00:06:30.159 [2024-11-28T04:55:59.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:30.159 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:30.159 Verification LBA range: start 0x0 length 0xbd0bd 00:06:30.159 Nvme0n1 : 5.09 2290.41 8.95 0.00 0.00 55771.76 9779.99 62511.26 00:06:30.159 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:30.159 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:30.159 Nvme0n1 : 5.09 1961.24 7.66 0.00 0.00 64769.17 7713.08 61301.37 00:06:30.159 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:30.159 Verification LBA range: start 0x0 length 0xa0000 00:06:30.159 Nvme1n1 : 5.09 2289.75 8.94 0.00 0.00 55712.73 12401.43 58074.98 00:06:30.159 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:30.159 Verification LBA range: start 0xa0000 length 0xa0000 00:06:30.159 Nvme1n1 : 5.09 1960.26 7.66 0.00 0.00 64694.40 7108.14 61704.66 00:06:30.159 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:30.159 Verification LBA range: start 0x0 length 0x80000 00:06:30.159 Nvme2n1 : 5.09 2288.35 8.94 0.00 0.00 55654.21 13208.02 54848.59 00:06:30.159 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:30.159 Verification LBA range: start 0x80000 length 0x80000 00:06:30.159 Nvme2n1 : 5.10 1958.70 7.65 0.00 0.00 64621.18 7713.08 63317.86 00:06:30.159 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:30.159 Verification LBA range: start 0x0 length 0x80000 00:06:30.159 Nvme2n2 : 5.09 2287.26 8.93 0.00 0.00 55584.19 12905.55 55251.89 00:06:30.159 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:30.159 Verification LBA range: start 0x80000 length 0x80000 00:06:30.159 Nvme2n2 : 5.08 1954.65 7.64 0.00 0.00 65130.62 12048.54 60898.07 00:06:30.159 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:30.159 Verification LBA range: start 0x0 length 0x80000 00:06:30.159 Nvme2n3 : 5.10 2286.09 8.93 0.00 0.00 55509.93 11695.66 54445.29 00:06:30.159 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:30.159 Verification LBA range: start 0x80000 length 0x80000 00:06:30.159 Nvme2n3 : 5.08 1953.64 7.63 0.00 0.00 65027.60 14216.27 56461.78 00:06:30.159 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:30.159 Verification LBA range: start 0x0 length 0x20000 00:06:30.159 Nvme3n1 : 5.10 2285.51 8.93 0.00 0.00 55411.37 5898.24 54445.29 00:06:30.159 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:30.159 Verification LBA range: start 0x20000 length 0x20000 00:06:30.159 Nvme3n1 : 5.08 1952.94 7.63 0.00 0.00 64924.03 15022.87 59284.87 00:06:30.159 [2024-11-28T04:55:59.443Z] =================================================================================================================== 00:06:30.159 [2024-11-28T04:55:59.443Z] Total : 25468.81 99.49 0.00 0.00 59870.72 5898.24 63317.86 00:06:30.509 00:06:30.509 real 0m6.241s 00:06:30.509 user 0m11.901s 00:06:30.509 sys 0m0.152s 00:06:30.509 04:55:59 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.509 ************************************ 00:06:30.509 END TEST bdev_verify 00:06:30.509 ************************************ 00:06:30.509 04:55:59 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:30.509 04:55:59 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:30.509 04:55:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:30.509 04:55:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.509 04:55:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:30.509 ************************************ 00:06:30.509 START TEST bdev_verify_big_io 00:06:30.509 ************************************ 00:06:30.509 04:55:59 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:30.769 [2024-11-28 04:55:59.829864] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:30.769 [2024-11-28 04:55:59.829970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72060 ] 00:06:30.769 [2024-11-28 04:55:59.975938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.769 [2024-11-28 04:55:59.996386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.769 [2024-11-28 04:55:59.996425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.340 Running I/O for 5 seconds... 
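This big-I/O variant differs from the preceding verify run only in -o 65536, so each request moves 64 KiB instead of 4 KiB. That makes the MiB/s column in the table below a direct function of IOPS: MiB/s = IOPS × 65536 / 2^20 = IOPS / 16. As a spot check against the totals that follow, 1705.51 IOPS / 16 = 106.59 MiB/s, matching the Total row exactly; the 4 KiB verify table above obeys the same identity with a divisor of 256 (25468.81 / 256 = 99.49).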
00:06:36.028 469.00 IOPS, 29.31 MiB/s [2024-11-28T04:56:06.691Z] 2061.50 IOPS, 128.84 MiB/s [2024-11-28T04:56:06.691Z] 2756.00 IOPS, 172.25 MiB/s 00:06:37.407 Latency(us) 00:06:37.407 [2024-11-28T04:56:06.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:37.407 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:37.407 Verification LBA range: start 0x0 length 0xbd0b 00:06:37.407 Nvme0n1 : 5.64 132.33 8.27 0.00 0.00 924904.07 28432.54 1077613.49 00:06:37.407 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:37.407 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:37.407 Nvme0n1 : 5.73 125.58 7.85 0.00 0.00 976723.13 25508.63 1142141.24 00:06:37.408 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:37.408 Verification LBA range: start 0x0 length 0xa000 00:06:37.408 Nvme1n1 : 5.64 136.19 8.51 0.00 0.00 880426.01 79853.10 929199.66 00:06:37.408 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:37.408 Verification LBA range: start 0xa000 length 0xa000 00:06:37.408 Nvme1n1 : 5.80 119.18 7.45 0.00 0.00 988321.69 98808.12 1716438.25 00:06:37.408 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:37.408 Verification LBA range: start 0x0 length 0x8000 00:06:37.408 Nvme2n1 : 5.75 138.35 8.65 0.00 0.00 835728.77 103244.41 955010.76 00:06:37.408 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:37.408 Verification LBA range: start 0x8000 length 0x8000 00:06:37.408 Nvme2n1 : 5.80 122.76 7.67 0.00 0.00 936948.42 61301.37 1742249.35 00:06:37.408 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:37.408 Verification LBA range: start 0x0 length 0x8000 00:06:37.408 Nvme2n2 : 5.86 148.88 9.30 0.00 0.00 757536.56 11998.13 974369.08 00:06:37.408 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:37.408 Verification LBA range: start 0x8000 length 0x8000 00:06:37.408 Nvme2n2 : 5.95 127.15 7.95 0.00 0.00 866256.82 65737.65 1793871.56 00:06:37.408 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:37.408 Verification LBA range: start 0x0 length 0x8000 00:06:37.408 Nvme2n3 : 5.86 152.93 9.56 0.00 0.00 716652.25 37506.76 993727.41 00:06:37.408 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:37.408 Verification LBA range: start 0x8000 length 0x8000 00:06:37.408 Nvme2n3 : 5.97 141.51 8.84 0.00 0.00 764040.73 7662.67 1832588.21 00:06:37.408 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:37.408 Verification LBA range: start 0x0 length 0x2000 00:06:37.408 Nvme3n1 : 5.95 175.94 11.00 0.00 0.00 605841.22 1569.08 1000180.18 00:06:37.408 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:37.408 Verification LBA range: start 0x2000 length 0x2000 00:06:37.408 Nvme3n1 : 6.04 184.69 11.54 0.00 0.00 568229.97 759.34 1051802.39 00:06:37.408 [2024-11-28T04:56:06.692Z] =================================================================================================================== 00:06:37.408 [2024-11-28T04:56:06.692Z] Total : 1705.51 106.59 0.00 0.00 798442.20 759.34 1832588.21 00:06:37.976 00:06:37.976 real 0m7.344s 00:06:37.976 user 0m13.989s 00:06:37.976 sys 0m0.209s 00:06:37.976 04:56:07 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.976 ************************************ 
00:06:37.976 END TEST bdev_verify_big_io 00:06:37.976 ************************************ 00:06:37.976 04:56:07 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:37.976 04:56:07 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:37.976 04:56:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:37.976 04:56:07 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.976 04:56:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.976 ************************************ 00:06:37.976 START TEST bdev_write_zeroes 00:06:37.976 ************************************ 00:06:37.976 04:56:07 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:37.976 [2024-11-28 04:56:07.243852] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:37.976 [2024-11-28 04:56:07.243969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72163 ] 00:06:38.236 [2024-11-28 04:56:07.392341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.236 [2024-11-28 04:56:07.413080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.808 Running I/O for 1 seconds... 00:06:39.743 58368.00 IOPS, 228.00 MiB/s 00:06:39.743 Latency(us) 00:06:39.743 [2024-11-28T04:56:09.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:39.743 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:39.743 Nvme0n1 : 1.02 9745.83 38.07 0.00 0.00 13107.94 5091.64 27625.94 00:06:39.743 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:39.743 Nvme1n1 : 1.02 9737.05 38.04 0.00 0.00 13107.70 8822.15 25609.45 00:06:39.743 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:39.743 Nvme2n1 : 1.02 9728.32 38.00 0.00 0.00 13040.36 8771.74 26214.40 00:06:39.743 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:39.743 Nvme2n2 : 1.02 9719.99 37.97 0.00 0.00 13018.71 8822.15 27021.00 00:06:39.743 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:39.743 Nvme2n3 : 1.02 9711.64 37.94 0.00 0.00 13004.76 7259.37 27222.65 00:06:39.743 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:39.743 Nvme3n1 : 1.02 9703.28 37.90 0.00 0.00 12992.00 7007.31 27021.00 00:06:39.743 [2024-11-28T04:56:09.027Z] =================================================================================================================== 00:06:39.743 [2024-11-28T04:56:09.027Z] Total : 58346.11 227.91 0.00 0.00 13045.25 5091.64 27625.94 00:06:39.744 00:06:39.744 real 0m1.792s 00:06:39.744 user 0m1.519s 00:06:39.744 sys 0m0.162s 00:06:39.744 04:56:08 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.744 04:56:08 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:39.744 
************************************ 00:06:39.744 END TEST bdev_write_zeroes 00:06:39.744 ************************************ 00:06:39.744 04:56:09 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:39.744 04:56:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:39.744 04:56:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.744 04:56:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:39.744 ************************************ 00:06:39.744 START TEST bdev_json_nonenclosed 00:06:39.744 ************************************ 00:06:39.744 04:56:09 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:40.023 [2024-11-28 04:56:09.080098] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:40.023 [2024-11-28 04:56:09.080216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72200 ] 00:06:40.023 [2024-11-28 04:56:09.219445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.023 [2024-11-28 04:56:09.236881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.023 [2024-11-28 04:56:09.236953] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:40.023 [2024-11-28 04:56:09.236965] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:40.023 [2024-11-28 04:56:09.236979] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.023 00:06:40.023 real 0m0.270s 00:06:40.023 user 0m0.095s 00:06:40.023 sys 0m0.072s 00:06:40.023 04:56:09 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.023 04:56:09 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:40.023 ************************************ 00:06:40.023 END TEST bdev_json_nonenclosed 00:06:40.023 ************************************ 00:06:40.281 04:56:09 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:40.281 04:56:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:40.281 04:56:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.281 04:56:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:40.281 ************************************ 00:06:40.281 START TEST bdev_json_nonarray 00:06:40.281 ************************************ 00:06:40.281 04:56:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:40.281 [2024-11-28 04:56:09.390982] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:40.281 [2024-11-28 04:56:09.391073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72225 ] 00:06:40.281 [2024-11-28 04:56:09.525110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.281 [2024-11-28 04:56:09.542219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.281 [2024-11-28 04:56:09.542282] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:06:40.281 [2024-11-28 04:56:09.542296] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:40.281 [2024-11-28 04:56:09.542309] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.542 00:06:40.542 real 0m0.257s 00:06:40.542 user 0m0.087s 00:06:40.542 sys 0m0.068s 00:06:40.542 04:56:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.542 04:56:09 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:40.542 ************************************ 00:06:40.542 END TEST bdev_json_nonarray 00:06:40.542 ************************************ 00:06:40.542 04:56:09 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:40.542 04:56:09 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:40.542 04:56:09 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:40.542 04:56:09 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:40.542 04:56:09 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:40.542 04:56:09 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:40.542 04:56:09 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:40.542 04:56:09 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:40.542 04:56:09 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:40.542 04:56:09 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:40.542 04:56:09 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:40.542 00:06:40.542 real 0m29.061s 00:06:40.542 user 0m46.611s 00:06:40.542 sys 0m4.573s 00:06:40.542 04:56:09 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.542 ************************************ 00:06:40.542 END TEST blockdev_nvme 00:06:40.542 ************************************ 00:06:40.542 04:56:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:40.542 04:56:09 -- spdk/autotest.sh@209 -- # uname -s 00:06:40.542 04:56:09 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:40.542 04:56:09 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:40.542 04:56:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:40.542 04:56:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.542 04:56:09 -- common/autotest_common.sh@10 -- # set +x 00:06:40.542 ************************************ 00:06:40.542 START TEST blockdev_nvme_gpt 00:06:40.542 ************************************ 00:06:40.542 04:56:09 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:40.542 * Looking for test storage... 
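The two JSON negative tests that just finished (bdev_json_nonenclosed and bdev_json_nonarray) feed bdevperf configs that fail validation with "not enclosed in {}" and "'subsystems' should be an array" respectively. For contrast, a minimal well-formed SPDK configuration has the shape below; the attach-controller entry mirrors the one loaded later in this log, and the traddr here is just one of the four controllers on this VM:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }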
00:06:40.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:40.542 04:56:09 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.542 04:56:09 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.542 04:56:09 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.542 04:56:09 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.542 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.542 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.542 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.542 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.542 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.542 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.542 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.542 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.542 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.542 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.542 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.542 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:40.542 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:40.542 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.542 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.542 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:40.801 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:40.801 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.801 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:40.801 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.801 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:40.801 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:40.801 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.801 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:40.801 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.801 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.801 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.801 04:56:09 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:40.801 04:56:09 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.801 04:56:09 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.801 --rc genhtml_branch_coverage=1 00:06:40.801 --rc genhtml_function_coverage=1 00:06:40.801 --rc genhtml_legend=1 00:06:40.801 --rc geninfo_all_blocks=1 00:06:40.801 --rc geninfo_unexecuted_blocks=1 00:06:40.801 00:06:40.801 ' 00:06:40.801 04:56:09 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.801 --rc 
genhtml_branch_coverage=1 00:06:40.801 --rc genhtml_function_coverage=1 00:06:40.801 --rc genhtml_legend=1 00:06:40.801 --rc geninfo_all_blocks=1 00:06:40.801 --rc geninfo_unexecuted_blocks=1 00:06:40.801 00:06:40.801 ' 00:06:40.801 04:56:09 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.801 --rc genhtml_branch_coverage=1 00:06:40.801 --rc genhtml_function_coverage=1 00:06:40.801 --rc genhtml_legend=1 00:06:40.801 --rc geninfo_all_blocks=1 00:06:40.801 --rc geninfo_unexecuted_blocks=1 00:06:40.801 00:06:40.801 ' 00:06:40.801 04:56:09 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.801 --rc genhtml_branch_coverage=1 00:06:40.801 --rc genhtml_function_coverage=1 00:06:40.801 --rc genhtml_legend=1 00:06:40.801 --rc geninfo_all_blocks=1 00:06:40.801 --rc geninfo_unexecuted_blocks=1 00:06:40.801 00:06:40.801 ' 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72300 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72300 
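start_spdk_tgt above launches the SPDK target in the background (pid 72300 here) and waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A condensed sketch of that pattern, simplified from the real waitforlisten helper in autotest_common.sh (which also checks the pid and bounds the retries with max_retries):

    # Simplified illustration of start_spdk_tgt + waitforlisten.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # Poll the default RPC socket until the target responds.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done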
00:06:40.802 04:56:09 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72300 ']' 00:06:40.802 04:56:09 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.802 04:56:09 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.802 04:56:09 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.802 04:56:09 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.802 04:56:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:40.802 04:56:09 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:40.802 [2024-11-28 04:56:09.914162] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:40.802 [2024-11-28 04:56:09.914303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72300 ] 00:06:40.802 [2024-11-28 04:56:10.049805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.802 [2024-11-28 04:56:10.068875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.735 04:56:10 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.735 04:56:10 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:41.735 04:56:10 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:41.735 04:56:10 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:41.735 04:56:10 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:41.735 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:41.993 Waiting for block devices as requested 00:06:41.993 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:41.993 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:42.252 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:42.252 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:47.542 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:47.542 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:47.542 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:47.542 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:47.542 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:47.542 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.542 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:47.542 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:47.542 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:47.542 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.543 04:56:16 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:47.543 04:56:16 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.543 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:47.543 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:47.543 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:47.543 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:47.543 04:56:16 
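The get_zoned_devs trace above walks every /sys/block/nvme* node and tests its queue/zoned attribute; on this VM each check evaluates '[[ none != none ]]', so no namespace is treated as zoned and all six end up in nvme_devs. The check reduces to roughly this (a simplification; the real helper keys an associative array rather than a flat list):

    zoned_devs=()
    for nvme in /sys/block/nvme*; do
        # queue/zoned reads "none" for conventional (non-zoned) block devices.
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs+=("${nvme##*/}")
        fi
    done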
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:47.543 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:47.543 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:47.543 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:47.543 BYT; 00:06:47.543 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:47.543 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:47.543 BYT; 00:06:47.543 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:47.543 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:47.543 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:47.543 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:47.543 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:47.543 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:47.543 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:47.802 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:47.802 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:47.802 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:47.802 04:56:16 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:47.802 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:47.802 04:56:16 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:48.735 The operation has completed successfully. 00:06:48.735 04:56:17 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:50.142 The operation has completed successfully. 00:06:50.142 04:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:50.403 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:50.975 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.975 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.975 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.975 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.975 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:50.975 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.975 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.975 [] 00:06:50.975 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.975 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:50.975 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:50.975 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:50.976 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:50.976 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:50.976 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.976 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.236 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.236 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:51.236 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.236 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.236 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.236 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:51.236 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:51.236 04:56:20 
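Condensing the GPT setup that was just traced: parted reported /dev/nvme0n1 with no disk label, so setup_gpt_conf claimed it, wrote a GPT label with two half-disk partitions, and sgdisk then stamped them with the SPDK partition-type GUIDs parsed out of module/bdev/gpt/gpt.h plus fixed unique GUIDs. The three effective commands, exactly as run above:

    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1

These type GUIDs are what the GPT bdev module matches on when it exposes the partitions as GPT bdevs in the dump further down.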
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.236 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.236 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.236 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:51.236 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.236 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.236 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.236 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:51.236 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.236 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.498 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.498 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:51.498 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:51.498 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.498 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.498 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:51.498 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.498 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:51.498 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:51.499 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0c982da5-5c34-4cea-bf5d-afae4015e14b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0c982da5-5c34-4cea-bf5d-afae4015e14b",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "bbd4848c-2e93-4a9c-a13f-e0f5e6196a23"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bbd4848c-2e93-4a9c-a13f-e0f5e6196a23",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "886cbfea-6aa2-407b-b175-8f6f01c6438c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "886cbfea-6aa2-407b-b175-8f6f01c6438c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9613b114-b92c-477e-8283-14cd8a669ae6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9613b114-b92c-477e-8283-14cd8a669ae6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "fe2f92f0-0ca2-407a-b414-1031f0607a2c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "fe2f92f0-0ca2-407a-b414-1031f0607a2c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:51.499 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:51.499 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:51.499 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:51.499 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 72300 00:06:51.499 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72300 ']' 00:06:51.499 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72300 00:06:51.499 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:51.499 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.499 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72300 00:06:51.499 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.499 killing process with pid 72300 00:06:51.499 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.499 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72300' 00:06:51.499 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72300 00:06:51.499 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72300 00:06:51.761 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:51.761 04:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:51.761 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:51.761 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.761 04:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.761 ************************************ 00:06:51.761 START TEST bdev_hello_world 00:06:51.761 ************************************ 00:06:51.761 04:56:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:51.761 
[2024-11-28 04:56:20.975481] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:51.761 [2024-11-28 04:56:20.975607] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72916 ] 00:06:52.022 [2024-11-28 04:56:21.121796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.022 [2024-11-28 04:56:21.150348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.283 [2024-11-28 04:56:21.547807] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:52.283 [2024-11-28 04:56:21.547880] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:52.283 [2024-11-28 04:56:21.547906] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:52.283 [2024-11-28 04:56:21.550332] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:52.283 [2024-11-28 04:56:21.551380] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:52.283 [2024-11-28 04:56:21.551455] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:52.283 [2024-11-28 04:56:21.551908] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:52.283 00:06:52.283 [2024-11-28 04:56:21.551942] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:52.544 00:06:52.544 real 0m0.815s 00:06:52.544 user 0m0.535s 00:06:52.544 sys 0m0.175s 00:06:52.544 04:56:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.544 04:56:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:52.544 ************************************ 00:06:52.544 END TEST bdev_hello_world 00:06:52.544 ************************************ 00:06:52.544 04:56:21 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:52.544 04:56:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:52.544 04:56:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.544 04:56:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.544 ************************************ 00:06:52.544 START TEST bdev_bounds 00:06:52.544 ************************************ 00:06:52.544 04:56:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:52.544 04:56:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72947 00:06:52.544 04:56:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.544 Process bdevio pid: 72947 00:06:52.544 04:56:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72947' 00:06:52.544 04:56:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72947 00:06:52.544 04:56:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72947 ']' 00:06:52.544 04:56:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
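The bdev_hello_world test that just finished ran the hello_bdev example against Nvme0n1: it opened the bdev, got an I/O channel, wrote a buffer, and read back "Hello World!". The same binary accepts any bdev name from the config via -b, e.g. (illustrative; any name from the earlier bdev_get_bdevs dump would do):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme3n1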
00:06:52.545 04:56:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.545 04:56:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.545 04:56:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.545 04:56:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:52.545 04:56:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:52.804 [2024-11-28 04:56:21.861774] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:52.804 [2024-11-28 04:56:21.861922] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72947 ] 00:06:52.804 [2024-11-28 04:56:22.009012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.804 [2024-11-28 04:56:22.043847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.804 [2024-11-28 04:56:22.044214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.804 [2024-11-28 04:56:22.044231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.738 04:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.738 04:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:53.738 04:56:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:53.738 I/O targets: 00:06:53.738 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:53.738 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:53.738 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:53.738 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:53.738 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:53.738 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:53.738 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:53.738 00:06:53.738 00:06:53.738 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.738 http://cunit.sourceforge.net/ 00:06:53.738 00:06:53.738 00:06:53.738 Suite: bdevio tests on: Nvme3n1 00:06:53.738 Test: blockdev write read block ...passed 00:06:53.738 Test: blockdev write zeroes read block ...passed 00:06:53.738 Test: blockdev write zeroes read no split ...passed 00:06:53.738 Test: blockdev write zeroes read split ...passed 00:06:53.738 Test: blockdev write zeroes read split partial ...passed 00:06:53.738 Test: blockdev reset ...[2024-11-28 04:56:22.814240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:53.739 [2024-11-28 04:56:22.815824] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:53.739 passed 00:06:53.739 Test: blockdev write read 8 blocks ...passed 00:06:53.739 Test: blockdev write read size > 128k ...passed 00:06:53.739 Test: blockdev write read invalid size ...passed 00:06:53.739 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.739 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.739 Test: blockdev write read max offset ...passed 00:06:53.739 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.739 Test: blockdev writev readv 8 blocks ...passed 00:06:53.739 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.739 Test: blockdev writev readv block ...passed 00:06:53.739 Test: blockdev writev readv size > 128k ...passed 00:06:53.739 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.739 Test: blockdev comparev and writev ...[2024-11-28 04:56:22.820604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b2e0e000 len:0x1000 00:06:53.739 [2024-11-28 04:56:22.820645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.739 passed 00:06:53.739 Test: blockdev nvme passthru rw ...passed 00:06:53.739 Test: blockdev nvme passthru vendor specific ...[2024-11-28 04:56:22.821196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.739 passed 00:06:53.739 Test: blockdev nvme admin passthru ...[2024-11-28 04:56:22.821218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.739 passed 00:06:53.739 Test: blockdev copy ...passed 00:06:53.739 Suite: bdevio tests on: Nvme2n3 00:06:53.739 Test: blockdev write read block ...passed 00:06:53.739 Test: blockdev write zeroes read block ...passed 00:06:53.739 Test: blockdev write zeroes read no split ...passed 00:06:53.739 Test: blockdev write zeroes read split ...passed 00:06:53.739 Test: blockdev write zeroes read split partial ...passed 00:06:53.739 Test: blockdev reset ...[2024-11-28 04:56:22.836137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:53.739 [2024-11-28 04:56:22.837812] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:53.739 passed 00:06:53.739 Test: blockdev write read 8 blocks ...passed 00:06:53.739 Test: blockdev write read size > 128k ...passed 00:06:53.739 Test: blockdev write read invalid size ...passed 00:06:53.739 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.739 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.739 Test: blockdev write read max offset ...passed 00:06:53.739 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.739 Test: blockdev writev readv 8 blocks ...passed 00:06:53.739 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.739 Test: blockdev writev readv block ...passed 00:06:53.739 Test: blockdev writev readv size > 128k ...passed 00:06:53.739 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.739 Test: blockdev comparev and writev ...[2024-11-28 04:56:22.842662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b2e08000 len:0x1000 00:06:53.739 [2024-11-28 04:56:22.842699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.739 passed 00:06:53.739 Test: blockdev nvme passthru rw ...passed 00:06:53.739 Test: blockdev nvme passthru vendor specific ...passed 00:06:53.739 Test: blockdev nvme admin passthru ...[2024-11-28 04:56:22.843340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.739 [2024-11-28 04:56:22.843361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.739 passed 00:06:53.739 Test: blockdev copy ...passed 00:06:53.739 Suite: bdevio tests on: Nvme2n2 00:06:53.739 Test: blockdev write read block ...passed 00:06:53.739 Test: blockdev write zeroes read block ...passed 00:06:53.739 Test: blockdev write zeroes read no split ...passed 00:06:53.739 Test: blockdev write zeroes read split ...passed 00:06:53.739 Test: blockdev write zeroes read split partial ...passed 00:06:53.739 Test: blockdev reset ...[2024-11-28 04:56:22.856615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:53.739 [2024-11-28 04:56:22.858193] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:53.739 passed 00:06:53.739 Test: blockdev write read 8 blocks ...passed 00:06:53.739 Test: blockdev write read size > 128k ...passed 00:06:53.739 Test: blockdev write read invalid size ...passed 00:06:53.739 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.739 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.739 Test: blockdev write read max offset ...passed 00:06:53.739 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.739 Test: blockdev writev readv 8 blocks ...passed 00:06:53.739 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.739 Test: blockdev writev readv block ...passed 00:06:53.739 Test: blockdev writev readv size > 128k ...passed 00:06:53.739 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.739 Test: blockdev comparev and writev ...[2024-11-28 04:56:22.862688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b2e02000 len:0x1000 00:06:53.739 [2024-11-28 04:56:22.862721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.739 passed 00:06:53.739 Test: blockdev nvme passthru rw ...passed 00:06:53.739 Test: blockdev nvme passthru vendor specific ...passed 00:06:53.739 Test: blockdev nvme admin passthru ...[2024-11-28 04:56:22.863247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.739 [2024-11-28 04:56:22.863261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.739 passed 00:06:53.739 Test: blockdev copy ...passed 00:06:53.739 Suite: bdevio tests on: Nvme2n1 00:06:53.739 Test: blockdev write read block ...passed 00:06:53.739 Test: blockdev write zeroes read block ...passed 00:06:53.739 Test: blockdev write zeroes read no split ...passed 00:06:53.739 Test: blockdev write zeroes read split ...passed 00:06:53.739 Test: blockdev write zeroes read split partial ...passed 00:06:53.739 Test: blockdev reset ...[2024-11-28 04:56:22.877761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:53.739 [2024-11-28 04:56:22.879360] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:53.739 passed 00:06:53.739 Test: blockdev write read 8 blocks ...passed 00:06:53.739 Test: blockdev write read size > 128k ...passed 00:06:53.739 Test: blockdev write read invalid size ...passed 00:06:53.739 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.739 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.739 Test: blockdev write read max offset ...passed 00:06:53.739 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.739 Test: blockdev writev readv 8 blocks ...passed 00:06:53.739 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.739 Test: blockdev writev readv block ...passed 00:06:53.739 Test: blockdev writev readv size > 128k ...passed 00:06:53.739 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.739 Test: blockdev comparev and writev ...[2024-11-28 04:56:22.886467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3204000 len:0x1000 00:06:53.739 [2024-11-28 04:56:22.886501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.739 passed 00:06:53.739 Test: blockdev nvme passthru rw ...passed 00:06:53.739 Test: blockdev nvme passthru vendor specific ...[2024-11-28 04:56:22.887288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.739 [2024-11-28 04:56:22.887308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.739 passed 00:06:53.739 Test: blockdev nvme admin passthru ...passed 00:06:53.739 Test: blockdev copy ...passed 00:06:53.739 Suite: bdevio tests on: Nvme1n1p2 00:06:53.739 Test: blockdev write read block ...passed 00:06:53.739 Test: blockdev write zeroes read block ...passed 00:06:53.739 Test: blockdev write zeroes read no split ...passed 00:06:53.739 Test: blockdev write zeroes read split ...passed 00:06:53.739 Test: blockdev write zeroes read split partial ...passed 00:06:53.739 Test: blockdev reset ...[2024-11-28 04:56:22.901808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:53.739 passed 00:06:53.739 Test: blockdev write read 8 blocks ...[2024-11-28 04:56:22.903199] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:53.739 passed 00:06:53.739 Test: blockdev write read size > 128k ...passed 00:06:53.739 Test: blockdev write read invalid size ...passed 00:06:53.739 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.739 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.739 Test: blockdev write read max offset ...passed 00:06:53.739 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.739 Test: blockdev writev readv 8 blocks ...passed 00:06:53.739 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.739 Test: blockdev writev readv block ...passed 00:06:53.739 Test: blockdev writev readv size > 128k ...passed 00:06:53.739 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.739 Test: blockdev comparev and writev ...[2024-11-28 04:56:22.907927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d023d000 len:0x1000 00:06:53.739 [2024-11-28 04:56:22.907959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.739 passed 00:06:53.739 Test: blockdev nvme passthru rw ...passed 00:06:53.739 Test: blockdev nvme passthru vendor specific ...passed 00:06:53.739 Test: blockdev nvme admin passthru ...passed 00:06:53.739 Test: blockdev copy ...passed 00:06:53.739 Suite: bdevio tests on: Nvme1n1p1 00:06:53.739 Test: blockdev write read block ...passed 00:06:53.739 Test: blockdev write zeroes read block ...passed 00:06:53.739 Test: blockdev write zeroes read no split ...passed 00:06:53.739 Test: blockdev write zeroes read split ...passed 00:06:53.739 Test: blockdev write zeroes read split partial ...passed 00:06:53.740 Test: blockdev reset ...[2024-11-28 04:56:22.919294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:53.740 [2024-11-28 04:56:22.921990] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:53.740 passed 00:06:53.740 Test: blockdev write read 8 blocks ...passed 00:06:53.740 Test: blockdev write read size > 128k ...passed 00:06:53.740 Test: blockdev write read invalid size ...passed 00:06:53.740 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.740 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.740 Test: blockdev write read max offset ...passed 00:06:53.740 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.740 Test: blockdev writev readv 8 blocks ...passed 00:06:53.740 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.740 Test: blockdev writev readv block ...passed 00:06:53.740 Test: blockdev writev readv size > 128k ...passed 00:06:53.740 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.740 Test: blockdev comparev and writev ...[2024-11-28 04:56:22.934804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2d0239000 len:0x1000 00:06:53.740 [2024-11-28 04:56:22.934837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.740 passed 00:06:53.740 Test: blockdev nvme passthru rw ...passed 00:06:53.740 Test: blockdev nvme passthru vendor specific ...passed 00:06:53.740 Test: blockdev nvme admin passthru ...passed 00:06:53.740 Test: blockdev copy ...passed 00:06:53.740 Suite: bdevio tests on: Nvme0n1 00:06:53.740 Test: blockdev write read block ...passed 00:06:53.740 Test: blockdev write zeroes read block ...passed 00:06:53.740 Test: blockdev write zeroes read no split ...passed 00:06:53.740 Test: blockdev write zeroes read split ...passed 00:06:53.740 Test: blockdev write zeroes read split partial ...passed 00:06:53.740 Test: blockdev reset ...[2024-11-28 04:56:22.953189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:53.740 passed 00:06:53.740 Test: blockdev write read 8 blocks ...[2024-11-28 04:56:22.955869] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:53.740 passed 00:06:53.740 Test: blockdev write read size > 128k ...passed 00:06:53.740 Test: blockdev write read invalid size ...passed 00:06:53.740 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.740 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.740 Test: blockdev write read max offset ...passed 00:06:53.740 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.740 Test: blockdev writev readv 8 blocks ...passed 00:06:53.740 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.740 Test: blockdev writev readv block ...passed 00:06:53.740 Test: blockdev writev readv size > 128k ...passed 00:06:53.740 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.740 Test: blockdev comparev and writev ...passed 00:06:53.740 Test: blockdev nvme passthru rw ...[2024-11-28 04:56:22.967307] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:53.740 separate metadata which is not supported yet. 
00:06:53.740 passed 00:06:53.740 Test: blockdev nvme passthru vendor specific ...[2024-11-28 04:56:22.968700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:53.740 [2024-11-28 04:56:22.968735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:53.740 passed 00:06:53.740 Test: blockdev nvme admin passthru ...passed 00:06:53.740 Test: blockdev copy ...passed 00:06:53.740 00:06:53.740 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.740 suites 7 7 n/a 0 0 00:06:53.740 tests 161 161 161 0 0 00:06:53.740 asserts 1025 1025 1025 0 n/a 00:06:53.740 00:06:53.740 Elapsed time = 0.406 seconds 00:06:53.740 0 00:06:53.740 04:56:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72947 00:06:53.740 04:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72947 ']' 00:06:53.740 04:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72947 00:06:53.740 04:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:53.740 04:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.740 04:56:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72947 00:06:53.740 04:56:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.740 04:56:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.740 04:56:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72947' 00:06:53.740 killing process with pid 72947 00:06:53.740 04:56:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72947 00:06:53.740 04:56:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72947 00:06:53.999 04:56:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:53.999 00:06:53.999 real 0m1.355s 00:06:53.999 user 0m3.450s 00:06:53.999 sys 0m0.260s 00:06:53.999 04:56:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.999 ************************************ 00:06:53.999 END TEST bdev_bounds 00:06:53.999 04:56:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:53.999 ************************************ 00:06:53.999 04:56:23 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:53.999 04:56:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:53.999 04:56:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.999 04:56:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.999 ************************************ 00:06:53.999 START TEST bdev_nbd 00:06:53.999 ************************************ 00:06:53.999 04:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:53.999 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:53.999 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:54.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72990 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72990 /var/tmp/spdk-nbd.sock 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72990 ']' 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:54.000 04:56:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:54.261 [2024-11-28 04:56:23.288305] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
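Note: the trace above is the setup phase of nbd_function_test — bdev_svc is started on a private RPC socket, the harness waits for the socket (waitforlisten), and each bdev is then exported as an NBD device over that socket. A minimal standalone sketch of the same sequence, using only the paths and RPCs visible in the trace (the backgrounding and the readiness loop are simplifications of the harness helpers, and the explicit /dev/nbd0 argument is the optional form seen later in the trace):

    # Launch the bdev service on a dedicated RPC socket with the test config
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &

    # Wait until the UNIX socket exists (waitforlisten does this in the harness)
    while [ ! -S /var/tmp/spdk-nbd.sock ]; do sleep 0.1; done

    # Export a bdev as an NBD device via RPC
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk Nvme0n1 /dev/nbd0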
00:06:54.261 [2024-11-28 04:56:23.288582] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.261 [2024-11-28 04:56:23.433615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.261 [2024-11-28 04:56:23.454446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.200 1+0 records in 00:06:55.200 1+0 records out 00:06:55.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101755 s, 4.0 MB/s 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.200 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:55.461 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:55.461 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:55.461 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:55.461 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:55.461 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.461 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.462 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.462 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:55.462 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.462 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.462 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.462 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.462 1+0 records in 00:06:55.462 1+0 records out 00:06:55.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119407 s, 3.4 MB/s 00:06:55.462 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.462 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.462 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.462 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.462 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.462 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.462 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.462 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.723 1+0 records in 00:06:55.723 1+0 records out 00:06:55.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000907741 s, 4.5 MB/s 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.723 04:56:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.984 1+0 records in 00:06:55.984 1+0 records out 00:06:55.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00084054 s, 4.9 MB/s 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.984 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.245 1+0 records in 00:06:56.245 1+0 records out 00:06:56.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128812 s, 3.2 MB/s 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.245 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
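Note: the waitfornbd blocks traced here follow one pattern per device — poll /proc/partitions until the new nbd node is registered, read a single 4 KiB block with O_DIRECT to prove the device completes I/O, then check the copied size. Condensed into a plain loop (the 20-try bound matches the trace; the sleep interval is an assumption, since the trace does not show it):

    nbd=nbd3
    for i in $(seq 1 20); do
        grep -q -w "$nbd" /proc/partitions && break
        sleep 0.1    # retry interval assumed; only the 20-try bound is visible in the trace
    done
    # One direct-I/O read to confirm the device answers I/O
    dd if=/dev/$nbd of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct
    stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # expect 4096
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest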
00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.506 1+0 records in 00:06:56.506 1+0 records out 00:06:56.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106156 s, 3.9 MB/s 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.506 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:56.767 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.768 1+0 records in 00:06:56.768 1+0 records out 00:06:56.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000669934 s, 6.1 MB/s 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.768 04:56:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.029 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:57.029 { 00:06:57.029 "nbd_device": "/dev/nbd0", 00:06:57.029 "bdev_name": "Nvme0n1" 00:06:57.029 }, 00:06:57.029 { 00:06:57.029 "nbd_device": "/dev/nbd1", 00:06:57.029 "bdev_name": "Nvme1n1p1" 00:06:57.029 }, 00:06:57.029 { 00:06:57.029 "nbd_device": "/dev/nbd2", 00:06:57.029 "bdev_name": "Nvme1n1p2" 00:06:57.029 }, 00:06:57.029 { 00:06:57.029 "nbd_device": "/dev/nbd3", 00:06:57.029 "bdev_name": "Nvme2n1" 00:06:57.029 }, 00:06:57.029 { 00:06:57.029 "nbd_device": "/dev/nbd4", 00:06:57.029 "bdev_name": "Nvme2n2" 00:06:57.029 }, 00:06:57.029 { 00:06:57.029 "nbd_device": "/dev/nbd5", 00:06:57.029 "bdev_name": "Nvme2n3" 00:06:57.029 }, 00:06:57.029 { 00:06:57.029 "nbd_device": "/dev/nbd6", 00:06:57.029 "bdev_name": "Nvme3n1" 00:06:57.029 } 00:06:57.029 ]' 00:06:57.029 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:57.029 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:57.029 { 00:06:57.029 "nbd_device": "/dev/nbd0", 00:06:57.029 "bdev_name": "Nvme0n1" 00:06:57.029 }, 00:06:57.029 { 00:06:57.029 "nbd_device": "/dev/nbd1", 00:06:57.029 "bdev_name": "Nvme1n1p1" 00:06:57.029 }, 00:06:57.029 { 00:06:57.029 "nbd_device": "/dev/nbd2", 00:06:57.029 "bdev_name": "Nvme1n1p2" 00:06:57.029 }, 00:06:57.029 { 00:06:57.029 "nbd_device": "/dev/nbd3", 00:06:57.029 "bdev_name": "Nvme2n1" 00:06:57.029 }, 00:06:57.029 { 00:06:57.029 "nbd_device": "/dev/nbd4", 00:06:57.029 "bdev_name": "Nvme2n2" 00:06:57.029 }, 00:06:57.029 { 00:06:57.029 "nbd_device": "/dev/nbd5", 00:06:57.030 "bdev_name": "Nvme2n3" 00:06:57.030 }, 00:06:57.030 { 00:06:57.030 "nbd_device": "/dev/nbd6", 00:06:57.030 "bdev_name": "Nvme3n1" 00:06:57.030 } 00:06:57.030 ]' 00:06:57.030 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:57.030 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:57.030 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.030 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:57.030 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.030 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:57.030 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.030 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.291 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.291 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.291 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.291 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.291 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.291 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.291 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.291 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.291 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.291 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.553 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.553 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.553 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.553 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.553 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.553 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.553 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.553 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.553 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.553 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:57.814 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:57.815 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:57.815 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:57.815 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.815 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.815 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:57.815 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.815 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.815 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.815 04:56:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:58.076 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:58.076 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:58.076 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:58.076 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.076 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.076 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:58.076 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.076 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.076 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.076 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.338 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.339 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:58.600 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:58.600 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:58.600 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:06:58.600 04:56:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.600 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.600 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:58.600 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.600 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.600 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.600 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.600 04:56:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.861 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:58.862 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.862 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:58.862 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:59.123 /dev/nbd0 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.123 1+0 records in 00:06:59.123 1+0 records out 00:06:59.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487695 s, 8.4 MB/s 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.123 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.124 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.124 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:59.385 /dev/nbd1 00:06:59.385 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:59.385 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:59.385 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:59.385 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.385 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.386 04:56:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.386 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:59.386 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.386 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.386 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.386 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.386 1+0 records in 00:06:59.386 1+0 records out 00:06:59.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000665072 s, 6.2 MB/s 00:06:59.386 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.386 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.386 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.386 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.386 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.386 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.386 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.386 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:59.647 /dev/nbd10 00:06:59.647 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.648 1+0 records in 00:06:59.648 1+0 records out 00:06:59.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601277 s, 6.8 MB/s 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.648 04:56:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:59.909 /dev/nbd11 00:06:59.909 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:59.909 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:59.909 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:59.909 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.909 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.909 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.909 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:59.909 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.909 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.909 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.909 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.909 1+0 records in 00:06:59.909 1+0 records out 00:06:59.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557213 s, 7.4 MB/s 00:06:59.909 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.909 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.909 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.910 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.910 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.910 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.910 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.910 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:00.171 /dev/nbd12 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:00.171 04:56:29 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.171 1+0 records in 00:07:00.171 1+0 records out 00:07:00.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420917 s, 9.7 MB/s 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.171 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:00.433 /dev/nbd13 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.433 1+0 records in 00:07:00.433 1+0 records out 00:07:00.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000790949 s, 5.2 MB/s 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.433 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:00.694 /dev/nbd14 00:07:00.694 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:00.694 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:00.694 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:00.694 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:00.694 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.694 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.694 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:00.694 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:00.694 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.694 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.694 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.694 1+0 records in 00:07:00.694 1+0 records out 00:07:00.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704914 s, 5.8 MB/s 00:07:00.694 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.694 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:00.694 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.694 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.695 04:56:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:00.695 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.695 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.695 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.695 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.695 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.956 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:00.956 { 00:07:00.956 "nbd_device": "/dev/nbd0", 00:07:00.956 "bdev_name": "Nvme0n1" 00:07:00.956 }, 00:07:00.956 { 00:07:00.956 "nbd_device": "/dev/nbd1", 00:07:00.956 "bdev_name": "Nvme1n1p1" 00:07:00.956 }, 00:07:00.956 { 00:07:00.956 "nbd_device": "/dev/nbd10", 00:07:00.956 "bdev_name": "Nvme1n1p2" 00:07:00.956 }, 00:07:00.956 { 00:07:00.956 "nbd_device": "/dev/nbd11", 00:07:00.956 "bdev_name": "Nvme2n1" 00:07:00.956 }, 00:07:00.956 { 00:07:00.956 "nbd_device": "/dev/nbd12", 00:07:00.956 "bdev_name": "Nvme2n2" 00:07:00.956 }, 00:07:00.956 { 00:07:00.956 "nbd_device": "/dev/nbd13", 00:07:00.956 "bdev_name": "Nvme2n3" 00:07:00.956 }, 00:07:00.956 { 
00:07:00.956 "nbd_device": "/dev/nbd14", 00:07:00.956 "bdev_name": "Nvme3n1" 00:07:00.956 } 00:07:00.956 ]' 00:07:00.956 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.956 04:56:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.956 { 00:07:00.956 "nbd_device": "/dev/nbd0", 00:07:00.956 "bdev_name": "Nvme0n1" 00:07:00.956 }, 00:07:00.956 { 00:07:00.956 "nbd_device": "/dev/nbd1", 00:07:00.956 "bdev_name": "Nvme1n1p1" 00:07:00.956 }, 00:07:00.956 { 00:07:00.956 "nbd_device": "/dev/nbd10", 00:07:00.956 "bdev_name": "Nvme1n1p2" 00:07:00.956 }, 00:07:00.956 { 00:07:00.956 "nbd_device": "/dev/nbd11", 00:07:00.956 "bdev_name": "Nvme2n1" 00:07:00.956 }, 00:07:00.956 { 00:07:00.956 "nbd_device": "/dev/nbd12", 00:07:00.956 "bdev_name": "Nvme2n2" 00:07:00.956 }, 00:07:00.956 { 00:07:00.956 "nbd_device": "/dev/nbd13", 00:07:00.956 "bdev_name": "Nvme2n3" 00:07:00.956 }, 00:07:00.956 { 00:07:00.956 "nbd_device": "/dev/nbd14", 00:07:00.956 "bdev_name": "Nvme3n1" 00:07:00.956 } 00:07:00.956 ]' 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:00.956 /dev/nbd1 00:07:00.956 /dev/nbd10 00:07:00.956 /dev/nbd11 00:07:00.956 /dev/nbd12 00:07:00.956 /dev/nbd13 00:07:00.956 /dev/nbd14' 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:00.956 /dev/nbd1 00:07:00.956 /dev/nbd10 00:07:00.956 /dev/nbd11 00:07:00.956 /dev/nbd12 00:07:00.956 /dev/nbd13 00:07:00.956 /dev/nbd14' 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:00.956 256+0 records in 00:07:00.956 256+0 records out 00:07:00.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00681524 s, 154 MB/s 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:00.956 256+0 records in 00:07:00.956 256+0 records out 00:07:00.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135173 s, 7.8 MB/s 00:07:00.956 
04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.956 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:01.218 256+0 records in 00:07:01.218 256+0 records out 00:07:01.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129406 s, 8.1 MB/s 00:07:01.218 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.218 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:01.218 256+0 records in 00:07:01.218 256+0 records out 00:07:01.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.107875 s, 9.7 MB/s 00:07:01.218 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.218 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:01.480 256+0 records in 00:07:01.480 256+0 records out 00:07:01.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145187 s, 7.2 MB/s 00:07:01.480 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.480 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:01.480 256+0 records in 00:07:01.480 256+0 records out 00:07:01.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154863 s, 6.8 MB/s 00:07:01.480 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.480 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:01.741 256+0 records in 00:07:01.741 256+0 records out 00:07:01.741 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186758 s, 5.6 MB/s 00:07:01.741 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.741 04:56:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:02.001 256+0 records in 00:07:02.001 256+0 records out 00:07:02.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136058 s, 7.7 MB/s 00:07:02.001 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:02.001 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:02.001 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.002 04:56:31 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.002 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.263 
04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.263 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:02.524 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:02.524 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:02.524 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:02.524 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.524 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.524 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:02.524 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.524 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.524 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.524 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:02.785 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:02.785 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:02.785 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:02.785 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.785 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.785 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:02.785 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.785 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.785 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.785 04:56:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:03.046 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:03.046 04:56:32 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:03.046 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:03.046 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.046 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.046 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:03.046 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.046 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.046 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.046 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.306 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.566 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.566 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.566 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.566 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.566 
04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.566 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.566 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:03.566 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.566 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.566 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:03.566 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:03.566 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:03.566 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:03.566 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.566 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:03.566 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:03.826 malloc_lvol_verify 00:07:03.826 04:56:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:04.086 9ace0087-c010-4781-919d-9754428ff2fc 00:07:04.086 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:04.347 bc399d73-fae0-48b4-a53a-d888cc09a2f1 00:07:04.347 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:04.347 /dev/nbd0 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:04.607 mke2fs 1.47.0 (5-Feb-2023) 00:07:04.607 Discarding device blocks: 0/4096 done 00:07:04.607 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:04.607 00:07:04.607 Allocating group tables: 0/1 done 00:07:04.607 Writing inode tables: 0/1 done 00:07:04.607 Creating journal (1024 blocks): done 00:07:04.607 Writing superblocks and filesystem accounting information: 0/1 done 00:07:04.607 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.607 04:56:33 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72990 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72990 ']' 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72990 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.607 04:56:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72990 00:07:04.868 04:56:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.868 04:56:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.868 killing process with pid 72990 00:07:04.868 04:56:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72990' 00:07:04.868 04:56:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72990 00:07:04.868 04:56:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72990 00:07:04.868 04:56:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:04.868 00:07:04.868 real 0m10.857s 00:07:04.868 user 0m15.389s 00:07:04.868 sys 0m3.825s 00:07:04.868 04:56:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.868 ************************************ 00:07:04.868 END TEST bdev_nbd 00:07:04.868 ************************************ 00:07:04.868 04:56:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:04.868 04:56:34 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:04.868 04:56:34 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:04.868 skipping fio tests on NVMe due to multi-ns failures. 00:07:04.868 04:56:34 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:04.868 04:56:34 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
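That banner closes TEST bdev_nbd. Its data-integrity core was the write-then-compare loop traced earlier (nbd_common.sh@70-85); stripped of harness plumbing it condenses to roughly this sketch (paths shortened for readability):

# One random 1 MiB pattern, replayed onto every exported nbd device,
# then byte-compared back against the source file.
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M nbdrandtest "$dev"   # -b reports differing bytes; -n 1M bounds the compare
done
rm nbdrandtest

oflag=direct keeps the page cache out of the write path, so a passing cmp exercises the SPDK bdev underneath rather than cached pages.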
00:07:04.868 04:56:34 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:04.868 04:56:34 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:04.868 04:56:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:04.868 04:56:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.868 04:56:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:04.868 ************************************ 00:07:04.868 START TEST bdev_verify 00:07:04.868 ************************************ 00:07:04.868 04:56:34 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:05.129 [2024-11-28 04:56:34.190540] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:05.129 [2024-11-28 04:56:34.190662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73403 ] 00:07:05.129 [2024-11-28 04:56:34.336042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.129 [2024-11-28 04:56:34.357003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.129 [2024-11-28 04:56:34.357083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.699 Running I/O for 5 seconds... 
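Stripped of the run_test wrappers, the verify stage now in flight is a single bdevperf invocation. The flag meanings below are the standard bdevperf options; the -C reading is inferred from the paired per-core jobs in the results that follow:

# -q 128    queue depth per job          -o 4096   4 KiB I/Os
# -w verify write, read back, compare    -t 5      run for 5 seconds
# -m 0x3    cores 0 and 1                -C        every core drives every bdev (inferred)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3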
00:07:08.143 17984.00 IOPS, 70.25 MiB/s
[2024-11-28T04:56:37.999Z] 18912.00 IOPS, 73.88 MiB/s
[2024-11-28T04:56:39.382Z] 19498.67 IOPS, 76.17 MiB/s
[2024-11-28T04:56:39.953Z] 19904.00 IOPS, 77.75 MiB/s
[2024-11-28T04:56:39.953Z] 20262.40 IOPS, 79.15 MiB/s
00:07:10.669 Latency(us)
00:07:10.669 [2024-11-28T04:56:39.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:10.669 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:10.669 Verification LBA range: start 0x0 length 0xbd0bd
00:07:10.669 Nvme0n1 : 5.05 1444.64 5.64 0.00 0.00 88235.20 18148.43 84289.38
00:07:10.669 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:10.669 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:10.669 Nvme0n1 : 5.07 1400.85 5.47 0.00 0.00 90996.24 10889.06 119376.34
00:07:10.669 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:10.669 Verification LBA range: start 0x0 length 0x4ff80
00:07:10.669 Nvme1n1p1 : 5.05 1444.18 5.64 0.00 0.00 88083.80 19257.50 75416.81
00:07:10.669 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:10.669 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:10.669 Nvme1n1p1 : 5.08 1398.81 5.46 0.00 0.00 90947.95 16535.24 117763.15
00:07:10.669 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:10.669 Verification LBA range: start 0x0 length 0x4ff7f
00:07:10.669 Nvme1n1p2 : 5.07 1450.34 5.67 0.00 0.00 87579.21 6906.49 73400.32
00:07:10.669 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:10.669 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:10.669 Nvme1n1p2 : 5.08 1397.78 5.46 0.00 0.00 90836.64 19761.62 112116.97
00:07:10.669 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:10.669 Verification LBA range: start 0x0 length 0x80000
00:07:10.669 Nvme2n1 : 5.08 1449.95 5.66 0.00 0.00 87440.48 7057.72 76223.41
00:07:10.669 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:10.669 Verification LBA range: start 0x80000 length 0x80000
00:07:10.669 Nvme2n1 : 5.08 1397.41 5.46 0.00 0.00 90715.26 19055.85 104857.60
00:07:10.669 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:10.669 Verification LBA range: start 0x0 length 0x80000
00:07:10.669 Nvme2n2 : 5.09 1459.37 5.70 0.00 0.00 86879.77 8469.27 77836.60
00:07:10.669 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:10.669 Verification LBA range: start 0x80000 length 0x80000
00:07:10.669 Nvme2n2 : 5.09 1407.35 5.50 0.00 0.00 90203.06 6956.90 104051.00
00:07:10.669 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:10.669 Verification LBA range: start 0x0 length 0x80000
00:07:10.669 Nvme2n3 : 5.09 1458.95 5.70 0.00 0.00 86758.36 8822.15 78643.20
00:07:10.669 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:10.669 Verification LBA range: start 0x80000 length 0x80000
00:07:10.669 Nvme2n3 : 5.09 1406.98 5.50 0.00 0.00 90072.88 6856.07 111310.38
00:07:10.669 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:10.669 Verification LBA range: start 0x0 length 0x20000
00:07:10.669 Nvme3n1 : 5.09 1458.57 5.70 0.00 0.00 86641.13 9175.04 79853.10
00:07:10.669 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:10.670 Verification LBA range: start 0x20000 length 0x20000
Nvme3n1 : 5.10 1406.58 5.49 0.00 0.00 89917.21 6906.49 118569.75 00:07:10.670 [2024-11-28T04:56:39.954Z] =================================================================================================================== 00:07:10.670 [2024-11-28T04:56:39.954Z] Total : 19981.76 78.05 0.00 0.00 88921.64 6856.07 119376.34 00:07:11.613 00:07:11.613 real 0m6.688s 00:07:11.613 user 0m12.679s 00:07:11.613 sys 0m0.206s 00:07:11.613 ************************************ 00:07:11.613 END TEST bdev_verify 00:07:11.613 04:56:40 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.613 04:56:40 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:11.613 ************************************ 00:07:11.613 04:56:40 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:11.613 04:56:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:11.613 04:56:40 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.613 04:56:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.613 ************************************ 00:07:11.613 START TEST bdev_verify_big_io 00:07:11.613 ************************************ 00:07:11.613 04:56:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:11.874 [2024-11-28 04:56:40.934668] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:11.874 [2024-11-28 04:56:40.934789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73501 ] 00:07:11.874 [2024-11-28 04:56:41.081852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.874 [2024-11-28 04:56:41.102795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.874 [2024-11-28 04:56:41.102856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.445 Running I/O for 5 seconds... 
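One consistency check on the verify summary above before the big-I/O results stream in: at -o 4096 each I/O is 1/256 MiB, so the Total row's IOPS and MiB/s columns must agree.

awk 'BEGIN { printf "%.2f MiB/s\n", 19981.76 * 4096 / 1048576 }'   # -> 78.05, matching the Total row

The run now starting is the same verify workload with -o 65536, so expect roughly an order of magnitude fewer IOPS at broadly similar bandwidth.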
00:07:18.290 144.00 IOPS, 9.00 MiB/s
[2024-11-28T04:56:47.833Z] 2231.00 IOPS, 139.44 MiB/s
[2024-11-28T04:56:47.833Z] 2848.33 IOPS, 178.02 MiB/s
00:07:18.549 Latency(us)
00:07:18.549 [2024-11-28T04:56:47.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:18.549 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:18.549 Verification LBA range: start 0x0 length 0xbd0b
00:07:18.549 Nvme0n1 : 6.00 89.36 5.59 0.00 0.00 1359187.86 24399.56 1355082.83
00:07:18.549 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:18.549 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:18.549 Nvme0n1 : 5.82 93.53 5.85 0.00 0.00 1306565.80 20064.10 1361535.61
00:07:18.549 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:18.549 Verification LBA range: start 0x0 length 0x4ff8
00:07:18.549 Nvme1n1p1 : 5.92 90.21 5.64 0.00 0.00 1293413.28 99211.42 1161499.57
00:07:18.549 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:18.549 Verification LBA range: start 0x4ff8 length 0x4ff8
00:07:18.549 Nvme1n1p1 : 5.94 97.04 6.07 0.00 0.00 1226805.21 112116.97 1167952.34
00:07:18.549 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:18.549 Verification LBA range: start 0x0 length 0x4ff7
00:07:18.549 Nvme1n1p2 : 6.00 95.97 6.00 0.00 0.00 1202375.55 79449.80 1142141.24
00:07:18.549 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:18.549 Verification LBA range: start 0x4ff7 length 0x4ff7
00:07:18.549 Nvme1n1p2 : 5.94 97.01 6.06 0.00 0.00 1184236.22 116149.96 1045349.61
00:07:18.549 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:18.549 Verification LBA range: start 0x0 length 0x8000
00:07:18.549 Nvme2n1 : 6.12 100.64 6.29 0.00 0.00 1112413.04 56461.78 1167952.34
00:07:18.549 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:18.549 Verification LBA range: start 0x8000 length 0x8000
00:07:18.549 Nvme2n1 : 6.03 98.97 6.19 0.00 0.00 1126830.09 91952.05 1193763.45
00:07:18.549 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:18.549 Verification LBA range: start 0x0 length 0x8000
00:07:18.549 Nvme2n2 : 6.12 104.53 6.53 0.00 0.00 1041745.76 59688.17 1206669.00
00:07:18.549 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:18.549 Verification LBA range: start 0x8000 length 0x8000
00:07:18.549 Nvme2n2 : 6.13 96.39 6.02 0.00 0.00 1118807.94 96791.63 1948738.17
00:07:18.549 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:18.549 Verification LBA range: start 0x0 length 0x8000
00:07:18.549 Nvme2n3 : 6.16 107.48 6.72 0.00 0.00 976584.18 34078.72 1238932.87
00:07:18.549 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:18.549 Verification LBA range: start 0x8000 length 0x8000
00:07:18.549 Nvme2n3 : 6.21 106.00 6.62 0.00 0.00 988453.27 32263.88 2026171.47
00:07:18.549 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:18.549 Verification LBA range: start 0x0 length 0x2000
00:07:18.549 Nvme3n1 : 6.20 123.81 7.74 0.00 0.00 823444.17 1688.81 1271196.75
00:07:18.549 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:18.549 Verification LBA range: start 0x2000 length 0x2000
00:07:18.549 Nvme3n1 : 6.21 120.88 7.55 0.00 0.00 839604.11 838.10 2064888.12
00:07:18.549
[2024-11-28T04:56:47.833Z] =================================================================================================================== 00:07:18.549 [2024-11-28T04:56:47.833Z] Total : 1421.82 88.86 0.00 0.00 1096469.90 838.10 2064888.12 00:07:19.486 00:07:19.486 real 0m7.699s 00:07:19.486 user 0m14.691s 00:07:19.486 sys 0m0.207s 00:07:19.486 04:56:48 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.486 ************************************ 00:07:19.486 END TEST bdev_verify_big_io 00:07:19.486 ************************************ 00:07:19.486 04:56:48 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:19.486 04:56:48 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:19.486 04:56:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:19.486 04:56:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.486 04:56:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:19.486 ************************************ 00:07:19.486 START TEST bdev_write_zeroes 00:07:19.486 ************************************ 00:07:19.487 04:56:48 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:19.487 [2024-11-28 04:56:48.688715] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:19.487 [2024-11-28 04:56:48.688824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73599 ] 00:07:19.747 [2024-11-28 04:56:48.835454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.747 [2024-11-28 04:56:48.854466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.006 Running I/O for 1 seconds... 
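As expected, the 64 KiB pass traded IOPS for per-I/O size at comparable bandwidth; the same cross-check reproduces the logged total, since each I/O at -o 65536 is 1/16 MiB:

awk 'BEGIN { printf "%.2f MiB/s\n", 1421.82 * 65536 / 1048576 }'   # -> 88.86, matching the big-I/O Total row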
00:07:21.387 56448.00 IOPS, 220.50 MiB/s
00:07:21.388
00:07:21.388 Latency(us)
00:07:21.388 [2024-11-28T04:56:50.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:21.388 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:21.388 Nvme0n1 : 1.02 8081.27 31.57 0.00 0.00 15804.58 7813.91 25609.45
00:07:21.388 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:21.388 Nvme1n1p1 : 1.02 8071.44 31.53 0.00 0.00 15800.65 11846.89 25004.50
00:07:21.388 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:21.388 Nvme1n1p2 : 1.02 8061.63 31.49 0.00 0.00 15765.15 11141.12 25508.63
00:07:21.388 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:21.388 Nvme2n1 : 1.03 8052.52 31.46 0.00 0.00 15711.63 8570.09 24702.03
00:07:21.388 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:21.388 Nvme2n2 : 1.03 8043.50 31.42 0.00 0.00 15703.61 8217.21 25105.33
00:07:21.388 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:21.388 Nvme2n3 : 1.03 8034.44 31.38 0.00 0.00 15696.01 7662.67 24702.03
00:07:21.388 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:21.388 Nvme3n1 : 1.03 8025.46 31.35 0.00 0.00 15689.05 7511.43 25811.10
00:07:21.388 [2024-11-28T04:56:50.672Z] ===================================================================================================================
00:07:21.388 [2024-11-28T04:56:50.672Z] Total : 56370.25 220.20 0.00 0.00 15738.67 7511.43 25811.10
00:07:21.388
00:07:21.388 real 0m1.816s
00:07:21.388 user 0m1.544s
00:07:21.388 sys 0m0.160s
00:07:21.388 04:56:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:21.388 04:56:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:21.388 ************************************
00:07:21.388 END TEST bdev_write_zeroes
00:07:21.388 ************************************
00:07:21.388 04:56:50 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:21.388 04:56:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:21.388 04:56:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:21.388 04:56:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:21.388 ************************************
00:07:21.388 START TEST bdev_json_nonenclosed
00:07:21.388 ************************************
00:07:21.388 04:56:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:21.388 [2024-11-28 04:56:50.541151] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
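Two quick reads on the write_zeroes table above: the seven single-core jobs share the load almost evenly, and the total is again self-consistent for 4 KiB I/Os.

awk 'BEGIN { printf "%.2f MiB/s, %.1f IOPS/bdev\n", 56370.25 / 256, 56370.25 / 7 }'   # -> 220.20 MiB/s, 8052.9 IOPS/bdev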
00:07:21.388 [2024-11-28 04:56:50.541280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73636 ] 00:07:21.649 [2024-11-28 04:56:50.685503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.649 [2024-11-28 04:56:50.704751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.649 [2024-11-28 04:56:50.704831] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:21.649 [2024-11-28 04:56:50.704851] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:21.649 [2024-11-28 04:56:50.704862] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.649 00:07:21.649 real 0m0.284s 00:07:21.649 user 0m0.110s 00:07:21.649 sys 0m0.071s 00:07:21.649 04:56:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.649 04:56:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:21.649 ************************************ 00:07:21.649 END TEST bdev_json_nonenclosed 00:07:21.649 ************************************ 00:07:21.649 04:56:50 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:21.649 04:56:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:21.649 04:56:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.649 04:56:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.649 ************************************ 00:07:21.649 START TEST bdev_json_nonarray 00:07:21.649 ************************************ 00:07:21.649 04:56:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:21.649 [2024-11-28 04:56:50.864565] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:21.649 [2024-11-28 04:56:50.864677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73661 ] 00:07:21.910 [2024-11-28 04:56:51.008522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.910 [2024-11-28 04:56:51.028057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.910 [2024-11-28 04:56:51.028143] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
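That *ERROR* line is the expected outcome: both JSON negative tests hand bdevperf a deliberately malformed --json and require startup to abort, which the spdk_app_stop warning on the following lines confirms. The fixture contents are not shown in the log; a hypothetical stand-in for the nonarray case would be:

# /tmp/bad.json is an invented fixture: "subsystems" must be an array, so config load rejects it.
echo '{ "subsystems": 42 }' > /tmp/bad.json
if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bad.json -q 128 -o 4096 -w write_zeroes -t 1 ''; then
    echo "FAIL: malformed config was accepted"
else
    echo "PASS: config rejected during load"
fi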
00:07:21.910 [2024-11-28 04:56:51.028163] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:21.911 [2024-11-28 04:56:51.028187] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.911 00:07:21.911 real 0m0.285s 00:07:21.911 user 0m0.106s 00:07:21.911 sys 0m0.075s 00:07:21.911 04:56:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.911 ************************************ 00:07:21.911 END TEST bdev_json_nonarray 00:07:21.911 ************************************ 00:07:21.911 04:56:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:21.911 04:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:21.911 04:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:21.911 04:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:21.911 04:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.911 04:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.911 04:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.911 ************************************ 00:07:21.911 START TEST bdev_gpt_uuid 00:07:21.911 ************************************ 00:07:21.911 04:56:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:21.911 04:56:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:21.911 04:56:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:21.911 04:56:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73681 00:07:21.911 04:56:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:21.911 04:56:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 73681 00:07:21.911 04:56:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:21.911 04:56:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 73681 ']' 00:07:21.911 04:56:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.911 04:56:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.911 04:56:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.911 04:56:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.911 04:56:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:22.172 [2024-11-28 04:56:51.223889] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
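spdk_tgt is now coming up for the GPT UUID checks. Once it listens on its default RPC socket (/var/tmp/spdk.sock, assumed here since the rpc_cmd traces pass no -s flag), the test drives it exactly as traced below; condensed, the flow is:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   # recreate the bdevs
$rpc bdev_wait_for_examine                                             # let the GPT partitions be discovered
# Look each partition up by its unique partition GUID and extract the asserted fields:
$rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 |
    jq -r '.[0].aliases[0], .[0].driver_specific.gpt.unique_partition_guid'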
00:07:22.172 [2024-11-28 04:56:51.224016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73681 ] 00:07:22.172 [2024-11-28 04:56:51.363234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.172 [2024-11-28 04:56:51.382858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.114 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.114 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:23.114 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:23.114 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.114 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:23.114 Some configs were skipped because the RPC state that can call them passed over. 00:07:23.114 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.114 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:23.114 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.114 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:23.375 { 00:07:23.375 "name": "Nvme1n1p1", 00:07:23.375 "aliases": [ 00:07:23.375 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:23.375 ], 00:07:23.375 "product_name": "GPT Disk", 00:07:23.375 "block_size": 4096, 00:07:23.375 "num_blocks": 655104, 00:07:23.375 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:23.375 "assigned_rate_limits": { 00:07:23.375 "rw_ios_per_sec": 0, 00:07:23.375 "rw_mbytes_per_sec": 0, 00:07:23.375 "r_mbytes_per_sec": 0, 00:07:23.375 "w_mbytes_per_sec": 0 00:07:23.375 }, 00:07:23.375 "claimed": false, 00:07:23.375 "zoned": false, 00:07:23.375 "supported_io_types": { 00:07:23.375 "read": true, 00:07:23.375 "write": true, 00:07:23.375 "unmap": true, 00:07:23.375 "flush": true, 00:07:23.375 "reset": true, 00:07:23.375 "nvme_admin": false, 00:07:23.375 "nvme_io": false, 00:07:23.375 "nvme_io_md": false, 00:07:23.375 "write_zeroes": true, 00:07:23.375 "zcopy": false, 00:07:23.375 "get_zone_info": false, 00:07:23.375 "zone_management": false, 00:07:23.375 "zone_append": false, 00:07:23.375 "compare": true, 00:07:23.375 "compare_and_write": false, 00:07:23.375 "abort": true, 00:07:23.375 "seek_hole": false, 00:07:23.375 "seek_data": false, 00:07:23.375 "copy": true, 00:07:23.375 "nvme_iov_md": false 00:07:23.375 }, 00:07:23.375 "driver_specific": { 
00:07:23.375 "gpt": { 00:07:23.375 "base_bdev": "Nvme1n1", 00:07:23.375 "offset_blocks": 256, 00:07:23.375 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:23.375 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:23.375 "partition_name": "SPDK_TEST_first" 00:07:23.375 } 00:07:23.375 } 00:07:23.375 } 00:07:23.375 ]' 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.375 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:23.375 { 00:07:23.375 "name": "Nvme1n1p2", 00:07:23.375 "aliases": [ 00:07:23.375 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:23.375 ], 00:07:23.375 "product_name": "GPT Disk", 00:07:23.375 "block_size": 4096, 00:07:23.375 "num_blocks": 655103, 00:07:23.375 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:23.375 "assigned_rate_limits": { 00:07:23.375 "rw_ios_per_sec": 0, 00:07:23.375 "rw_mbytes_per_sec": 0, 00:07:23.376 "r_mbytes_per_sec": 0, 00:07:23.376 "w_mbytes_per_sec": 0 00:07:23.376 }, 00:07:23.376 "claimed": false, 00:07:23.376 "zoned": false, 00:07:23.376 "supported_io_types": { 00:07:23.376 "read": true, 00:07:23.376 "write": true, 00:07:23.376 "unmap": true, 00:07:23.376 "flush": true, 00:07:23.376 "reset": true, 00:07:23.376 "nvme_admin": false, 00:07:23.376 "nvme_io": false, 00:07:23.376 "nvme_io_md": false, 00:07:23.376 "write_zeroes": true, 00:07:23.376 "zcopy": false, 00:07:23.376 "get_zone_info": false, 00:07:23.376 "zone_management": false, 00:07:23.376 "zone_append": false, 00:07:23.376 "compare": true, 00:07:23.376 "compare_and_write": false, 00:07:23.376 "abort": true, 00:07:23.376 "seek_hole": false, 00:07:23.376 "seek_data": false, 00:07:23.376 "copy": true, 00:07:23.376 "nvme_iov_md": false 00:07:23.376 }, 00:07:23.376 "driver_specific": { 00:07:23.376 "gpt": { 00:07:23.376 "base_bdev": "Nvme1n1", 00:07:23.376 "offset_blocks": 655360, 00:07:23.376 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:23.376 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:23.376 "partition_name": "SPDK_TEST_second" 00:07:23.376 } 00:07:23.376 } 00:07:23.376 } 00:07:23.376 ]' 00:07:23.376 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:23.376 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:23.376 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:23.376 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:23.376 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:23.376 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:23.376 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 73681 00:07:23.376 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 73681 ']' 00:07:23.376 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 73681 00:07:23.376 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:23.376 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.376 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73681 00:07:23.376 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.376 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.376 killing process with pid 73681 00:07:23.376 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73681' 00:07:23.636 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 73681 00:07:23.636 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 73681 00:07:23.636 00:07:23.636 real 0m1.766s 00:07:23.636 user 0m1.946s 00:07:23.636 sys 0m0.337s 00:07:23.636 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.636 ************************************ 00:07:23.636 END TEST bdev_gpt_uuid 00:07:23.636 ************************************ 00:07:23.636 04:56:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:23.895 04:56:52 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:23.895 04:56:52 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:23.895 04:56:52 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:23.895 04:56:52 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:23.895 04:56:52 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:23.895 04:56:52 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:23.895 04:56:52 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:23.895 04:56:52 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:23.895 04:56:52 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:24.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:24.415 Waiting for block devices as requested 00:07:24.415 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:24.415 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:24.415 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:24.415 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:29.693 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:29.693 04:56:58 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:29.693 04:56:58 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:29.953 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:29.953 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:29.953 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:29.953 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:29.953 04:56:59 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:29.953 00:07:29.953 real 0m49.349s 00:07:29.953 user 1m2.630s 00:07:29.953 sys 0m7.799s 00:07:29.953 04:56:59 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.953 ************************************ 00:07:29.953 END TEST blockdev_nvme_gpt 00:07:29.953 ************************************ 00:07:29.953 04:56:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:29.953 04:56:59 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:29.953 04:56:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.953 04:56:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.953 04:56:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.953 ************************************ 00:07:29.953 START TEST nvme 00:07:29.953 ************************************ 00:07:29.953 04:56:59 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:29.953 * Looking for test storage... 00:07:29.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:29.953 04:56:59 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.953 04:56:59 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.953 04:56:59 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.953 04:56:59 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.953 04:56:59 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.953 04:56:59 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.953 04:56:59 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.953 04:56:59 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.953 04:56:59 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.953 04:56:59 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.953 04:56:59 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.953 04:56:59 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.953 04:56:59 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.214 04:56:59 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.214 04:56:59 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.214 04:56:59 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:30.214 04:56:59 nvme -- scripts/common.sh@345 -- # : 1 00:07:30.214 04:56:59 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.214 04:56:59 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.214 04:56:59 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:30.214 04:56:59 nvme -- scripts/common.sh@353 -- # local d=1 00:07:30.214 04:56:59 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.214 04:56:59 nvme -- scripts/common.sh@355 -- # echo 1 00:07:30.214 04:56:59 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.214 04:56:59 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:30.214 04:56:59 nvme -- scripts/common.sh@353 -- # local d=2 00:07:30.214 04:56:59 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.214 04:56:59 nvme -- scripts/common.sh@355 -- # echo 2 00:07:30.214 04:56:59 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.214 04:56:59 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.214 04:56:59 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.214 04:56:59 nvme -- scripts/common.sh@368 -- # return 0 00:07:30.214 04:56:59 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.214 04:56:59 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.214 --rc genhtml_branch_coverage=1 00:07:30.214 --rc genhtml_function_coverage=1 00:07:30.214 --rc genhtml_legend=1 00:07:30.214 --rc geninfo_all_blocks=1 00:07:30.214 --rc geninfo_unexecuted_blocks=1 00:07:30.214 00:07:30.214 ' 00:07:30.214 04:56:59 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.214 --rc genhtml_branch_coverage=1 00:07:30.214 --rc genhtml_function_coverage=1 00:07:30.214 --rc genhtml_legend=1 00:07:30.214 --rc geninfo_all_blocks=1 00:07:30.214 --rc geninfo_unexecuted_blocks=1 00:07:30.214 00:07:30.214 ' 00:07:30.214 04:56:59 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.214 --rc genhtml_branch_coverage=1 00:07:30.214 --rc genhtml_function_coverage=1 00:07:30.214 --rc genhtml_legend=1 00:07:30.214 --rc geninfo_all_blocks=1 00:07:30.214 --rc geninfo_unexecuted_blocks=1 00:07:30.214 00:07:30.214 ' 00:07:30.214 04:56:59 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.214 --rc genhtml_branch_coverage=1 00:07:30.214 --rc genhtml_function_coverage=1 00:07:30.214 --rc genhtml_legend=1 00:07:30.214 --rc geninfo_all_blocks=1 00:07:30.214 --rc geninfo_unexecuted_blocks=1 00:07:30.214 00:07:30.214 ' 00:07:30.214 04:56:59 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:30.474 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:31.045 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:31.045 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:31.045 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:31.045 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:31.306 04:57:00 nvme -- nvme/nvme.sh@79 -- # uname 00:07:31.306 04:57:00 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:31.306 04:57:00 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:31.306 04:57:00 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:31.306 04:57:00 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:31.306 04:57:00 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:31.306 04:57:00 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:31.306 Waiting for stub to ready for secondary processes... 00:07:31.306 04:57:00 nvme -- common/autotest_common.sh@1075 -- # stubpid=74304 00:07:31.306 04:57:00 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:31.306 04:57:00 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:31.306 04:57:00 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74304 ]] 00:07:31.306 04:57:00 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:31.306 04:57:00 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:31.306 [2024-11-28 04:57:00.382317] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:31.306 [2024-11-28 04:57:00.382429] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:31.875 [2024-11-28 04:57:01.137130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.875 [2024-11-28 04:57:01.149812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.875 [2024-11-28 04:57:01.150101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.875 [2024-11-28 04:57:01.150171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.135 [2024-11-28 04:57:01.160957] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:32.135 [2024-11-28 04:57:01.160993] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:32.135 [2024-11-28 04:57:01.173265] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:32.135 [2024-11-28 04:57:01.173421] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:32.135 [2024-11-28 04:57:01.174021] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:32.135 [2024-11-28 04:57:01.174147] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:32.135 [2024-11-28 04:57:01.174336] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:32.135 [2024-11-28 04:57:01.175285] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:32.135 [2024-11-28 04:57:01.175641] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:32.135 [2024-11-28 04:57:01.175814] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:32.135 [2024-11-28 04:57:01.178964] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:32.135 [2024-11-28 04:57:01.179385] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:32.135 [2024-11-28 04:57:01.179559] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:32.135 [2024-11-28 04:57:01.179669] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:32.135 [2024-11-28 04:57:01.179781] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:32.135 done. 00:07:32.135 04:57:01 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:32.135 04:57:01 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:32.135 04:57:01 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:32.135 04:57:01 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:32.135 04:57:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.135 04:57:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:32.135 ************************************ 00:07:32.135 START TEST nvme_reset 00:07:32.135 ************************************ 00:07:32.135 04:57:01 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:32.393 Initializing NVMe Controllers 00:07:32.393 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:32.393 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:32.393 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:32.393 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:32.393 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:32.393 00:07:32.393 real 0m0.180s 00:07:32.393 user 0m0.060s 00:07:32.393 sys 0m0.076s 00:07:32.393 04:57:01 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.393 04:57:01 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:32.393 ************************************ 00:07:32.393 END TEST nvme_reset 00:07:32.393 ************************************ 00:07:32.393 04:57:01 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:32.393 04:57:01 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.393 04:57:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.393 04:57:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:32.394 ************************************ 00:07:32.394 START TEST nvme_identify 00:07:32.394 ************************************ 00:07:32.394 04:57:01 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:32.394 04:57:01 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:32.394 04:57:01 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:32.394 04:57:01 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:32.394 04:57:01 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:32.394 04:57:01 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:32.394 04:57:01 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:32.394 04:57:01 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:32.394 04:57:01 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:32.394 04:57:01 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:32.394 04:57:01 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:32.394 04:57:01 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:32.394 04:57:01 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:32.656 
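(The dump that follows is the output of the spdk_nvme_identify invocation traced above, run against the controllers that get_nvme_bdfs discovered. As a minimal standalone sketch of the same step, in bash: the paths assume the vagrant repo layout shown in this log, and the per-controller "-r trtype:PCIe traddr:..." transport-ID form is used instead of the harness's shared-memory "-i 0" flag, so verify the exact flags against your SPDK build.)
    # Hedged sketch: enumerate NVMe PCI addresses via gen_nvme.sh (JSON), then
    # run the identify example binary once per controller.
    rootdir=/home/vagrant/spdk_repo/spdk
    for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf"
    done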
===================================================== 00:07:32.656 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:32.656 ===================================================== 00:07:32.656 Controller Capabilities/Features 00:07:32.656 ================================ 00:07:32.656 Vendor ID: 1b36 00:07:32.656 Subsystem Vendor ID: 1af4 00:07:32.656 Serial Number: 12343 00:07:32.656 Model Number: QEMU NVMe Ctrl 00:07:32.656 Firmware Version: 8.0.0 00:07:32.656 Recommended Arb Burst: 6 00:07:32.656 IEEE OUI Identifier: 00 54 52 00:07:32.656 Multi-path I/O 00:07:32.656 May have multiple subsystem ports: No 00:07:32.656 May have multiple controllers: Yes 00:07:32.656 Associated with SR-IOV VF: No 00:07:32.656 Max Data Transfer Size: 524288 00:07:32.656 Max Number of Namespaces: 256 00:07:32.656 Max Number of I/O Queues: 64 00:07:32.656 NVMe Specification Version (VS): 1.4 00:07:32.656 NVMe Specification Version (Identify): 1.4 00:07:32.656 Maximum Queue Entries: 2048 00:07:32.656 Contiguous Queues Required: Yes 00:07:32.656 Arbitration Mechanisms Supported 00:07:32.656 Weighted Round Robin: Not Supported 00:07:32.656 Vendor Specific: Not Supported 00:07:32.656 Reset Timeout: 7500 ms 00:07:32.656 Doorbell Stride: 4 bytes 00:07:32.656 NVM Subsystem Reset: Not Supported 00:07:32.656 Command Sets Supported 00:07:32.656 NVM Command Set: Supported 00:07:32.656 Boot Partition: Not Supported 00:07:32.656 Memory Page Size Minimum: 4096 bytes 00:07:32.656 Memory Page Size Maximum: 65536 bytes 00:07:32.656 Persistent Memory Region: Not Supported 00:07:32.656 Optional Asynchronous Events Supported 00:07:32.656 Namespace Attribute Notices: Supported 00:07:32.656 Firmware Activation Notices: Not Supported 00:07:32.656 ANA Change Notices: Not Supported 00:07:32.656 PLE Aggregate Log Change Notices: Not Supported 00:07:32.656 LBA Status Info Alert Notices: Not Supported 00:07:32.656 EGE Aggregate Log Change Notices: Not Supported 00:07:32.656 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.656 Zone Descriptor Change Notices: Not Supported 00:07:32.656 Discovery Log Change Notices: Not Supported 00:07:32.656 Controller Attributes 00:07:32.657 128-bit Host Identifier: Not Supported 00:07:32.657 Non-Operational Permissive Mode: Not Supported 00:07:32.657 NVM Sets: Not Supported 00:07:32.657 Read Recovery Levels: Not Supported 00:07:32.657 Endurance Groups: Supported 00:07:32.657 Predictable Latency Mode: Not Supported 00:07:32.657 Traffic Based Keep Alive: Not Supported 00:07:32.657 Namespace Granularity: Not Supported 00:07:32.657 SQ Associations: Not Supported 00:07:32.657 UUID List: Not Supported 00:07:32.657 Multi-Domain Subsystem: Not Supported 00:07:32.657 Fixed Capacity Management: Not Supported 00:07:32.657 Variable Capacity Management: Not Supported 00:07:32.657 Delete Endurance Group: Not Supported 00:07:32.657 Delete NVM Set: Not Supported 00:07:32.657 Extended LBA Formats Supported: Supported 00:07:32.657 Flexible Data Placement Supported: Supported 00:07:32.657 00:07:32.657 Controller Memory Buffer Support 00:07:32.657 ================================ 00:07:32.657 Supported: No 00:07:32.657 00:07:32.657 Persistent Memory Region Support 00:07:32.657 ================================ 00:07:32.657 Supported: No 00:07:32.657 00:07:32.657 Admin Command Set Attributes 00:07:32.657 ============================ 00:07:32.657 Security Send/Receive: Not Supported 00:07:32.657 Format NVM: Supported 00:07:32.657 Firmware Activate/Download: Not Supported 00:07:32.657 Namespace Management: Supported
00:07:32.657 Device Self-Test: Not Supported 00:07:32.657 Directives: Supported 00:07:32.657 NVMe-MI: Not Supported 00:07:32.657 Virtualization Management: Not Supported 00:07:32.657 Doorbell Buffer Config: Supported 00:07:32.657 Get LBA Status Capability: Not Supported 00:07:32.657 Command & Feature Lockdown Capability: Not Supported 00:07:32.657 Abort Command Limit: 4 00:07:32.657 Async Event Request Limit: 4 00:07:32.657 Number of Firmware Slots: N/A 00:07:32.657 Firmware Slot 1 Read-Only: N/A 00:07:32.657 Firmware Activation Without Reset: N/A 00:07:32.657 Multiple Update Detection Support: N/A 00:07:32.657 [2024-11-28 04:57:01.785593] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74326 terminated unexpected 00:07:32.657 Firmware Update Granularity: No Information Provided 00:07:32.657 Per-Namespace SMART Log: Yes 00:07:32.657 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.657 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:32.657 Command Effects Log Page: Supported 00:07:32.657 Get Log Page Extended Data: Supported 00:07:32.657 Telemetry Log Pages: Not Supported 00:07:32.657 Persistent Event Log Pages: Not Supported 00:07:32.657 Supported Log Pages Log Page: May Support 00:07:32.657 Commands Supported & Effects Log Page: Not Supported 00:07:32.657 Feature Identifiers & Effects Log Page:May Support 00:07:32.657 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.657 Data Area 4 for Telemetry Log: Not Supported 00:07:32.657 Error Log Page Entries Supported: 1 00:07:32.657 Keep Alive: Not Supported 00:07:32.657 00:07:32.657 NVM Command Set Attributes 00:07:32.657 ========================== 00:07:32.657 Submission Queue Entry Size 00:07:32.657 Max: 64 00:07:32.657 Min: 64 00:07:32.657 Completion Queue Entry Size 00:07:32.657 Max: 16 00:07:32.657 Min: 16 00:07:32.657 Number of Namespaces: 256 00:07:32.657 Compare Command: Supported 00:07:32.657 Write Uncorrectable Command: Not Supported 00:07:32.657 Dataset Management Command: Supported 00:07:32.657 Write Zeroes Command: Supported 00:07:32.657 Set Features Save Field: Supported 00:07:32.657 Reservations: Not Supported 00:07:32.657 Timestamp: Supported 00:07:32.657 Copy: Supported 00:07:32.657 Volatile Write Cache: Present 00:07:32.657 Atomic Write Unit (Normal): 1 00:07:32.657 Atomic Write Unit (PFail): 1 00:07:32.657 Atomic Compare & Write Unit: 1 00:07:32.657 Fused Compare & Write: Not Supported 00:07:32.657 Scatter-Gather List 00:07:32.657 SGL Command Set: Supported 00:07:32.657 SGL Keyed: Not Supported 00:07:32.657 SGL Bit Bucket Descriptor: Not Supported 00:07:32.657 SGL Metadata Pointer: Not Supported 00:07:32.657 Oversized SGL: Not Supported 00:07:32.657 SGL Metadata Address: Not Supported 00:07:32.657 SGL Offset: Not Supported 00:07:32.657 Transport SGL Data Block: Not Supported 00:07:32.657 Replay Protected Memory Block: Not Supported 00:07:32.657 00:07:32.657 Firmware Slot Information 00:07:32.657 ========================= 00:07:32.657 Active slot: 1 00:07:32.657 Slot 1 Firmware Revision: 1.0 00:07:32.657 00:07:32.657 00:07:32.657 Commands Supported and Effects 00:07:32.657 ============================== 00:07:32.657 Admin Commands 00:07:32.657 -------------- 00:07:32.657 Delete I/O Submission Queue (00h): Supported 00:07:32.657 Create I/O Submission Queue (01h): Supported 00:07:32.657 Get Log Page (02h): Supported 00:07:32.657 Delete I/O Completion Queue (04h): Supported 00:07:32.657 Create I/O Completion Queue (05h): Supported 00:07:32.657 Identify (06h):
Supported 00:07:32.657 Abort (08h): Supported 00:07:32.657 Set Features (09h): Supported 00:07:32.657 Get Features (0Ah): Supported 00:07:32.657 Asynchronous Event Request (0Ch): Supported 00:07:32.657 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.657 Directive Send (19h): Supported 00:07:32.657 Directive Receive (1Ah): Supported 00:07:32.657 Virtualization Management (1Ch): Supported 00:07:32.657 Doorbell Buffer Config (7Ch): Supported 00:07:32.657 Format NVM (80h): Supported LBA-Change 00:07:32.657 I/O Commands 00:07:32.657 ------------ 00:07:32.657 Flush (00h): Supported LBA-Change 00:07:32.657 Write (01h): Supported LBA-Change 00:07:32.657 Read (02h): Supported 00:07:32.657 Compare (05h): Supported 00:07:32.657 Write Zeroes (08h): Supported LBA-Change 00:07:32.657 Dataset Management (09h): Supported LBA-Change 00:07:32.657 Unknown (0Ch): Supported 00:07:32.657 Unknown (12h): Supported 00:07:32.657 Copy (19h): Supported LBA-Change 00:07:32.657 Unknown (1Dh): Supported LBA-Change 00:07:32.657 00:07:32.657 Error Log 00:07:32.657 ========= 00:07:32.657 00:07:32.657 Arbitration 00:07:32.657 =========== 00:07:32.657 Arbitration Burst: no limit 00:07:32.657 00:07:32.657 Power Management 00:07:32.657 ================ 00:07:32.658 Number of Power States: 1 00:07:32.658 Current Power State: Power State #0 00:07:32.658 Power State #0: 00:07:32.658 Max Power: 25.00 W 00:07:32.658 Non-Operational State: Operational 00:07:32.658 Entry Latency: 16 microseconds 00:07:32.658 Exit Latency: 4 microseconds 00:07:32.658 Relative Read Throughput: 0 00:07:32.658 Relative Read Latency: 0 00:07:32.658 Relative Write Throughput: 0 00:07:32.658 Relative Write Latency: 0 00:07:32.658 Idle Power: Not Reported 00:07:32.658 Active Power: Not Reported 00:07:32.658 Non-Operational Permissive Mode: Not Supported 00:07:32.658 00:07:32.658 Health Information 00:07:32.658 ================== 00:07:32.658 Critical Warnings: 00:07:32.658 Available Spare Space: OK 00:07:32.658 Temperature: OK 00:07:32.658 Device Reliability: OK 00:07:32.658 Read Only: No 00:07:32.658 Volatile Memory Backup: OK 00:07:32.658 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.658 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.658 Available Spare: 0% 00:07:32.658 Available Spare Threshold: 0% 00:07:32.658 Life Percentage Used: 0% 00:07:32.658 Data Units Read: 841 00:07:32.658 Data Units Written: 770 00:07:32.658 Host Read Commands: 40964 00:07:32.658 Host Write Commands: 40387 00:07:32.658 Controller Busy Time: 0 minutes 00:07:32.658 Power Cycles: 0 00:07:32.658 Power On Hours: 0 hours 00:07:32.658 Unsafe Shutdowns: 0 00:07:32.658 Unrecoverable Media Errors: 0 00:07:32.658 Lifetime Error Log Entries: 0 00:07:32.658 Warning Temperature Time: 0 minutes 00:07:32.658 Critical Temperature Time: 0 minutes 00:07:32.658 00:07:32.658 Number of Queues 00:07:32.658 ================ 00:07:32.658 Number of I/O Submission Queues: 64 00:07:32.658 Number of I/O Completion Queues: 64 00:07:32.658 00:07:32.658 ZNS Specific Controller Data 00:07:32.658 ============================ 00:07:32.658 Zone Append Size Limit: 0 00:07:32.658 00:07:32.658 00:07:32.658 Active Namespaces 00:07:32.658 ================= 00:07:32.658 Namespace ID:1 00:07:32.658 Error Recovery Timeout: Unlimited 00:07:32.658 Command Set Identifier: NVM (00h) 00:07:32.658 Deallocate: Supported 00:07:32.658 Deallocated/Unwritten Error: Supported 00:07:32.658 Deallocated Read Value: All 0x00 00:07:32.658 Deallocate in Write Zeroes: Not Supported 00:07:32.658 
Deallocated Guard Field: 0xFFFF 00:07:32.658 Flush: Supported 00:07:32.658 Reservation: Not Supported 00:07:32.658 Namespace Sharing Capabilities: Multiple Controllers 00:07:32.658 Size (in LBAs): 262144 (1GiB) 00:07:32.658 Capacity (in LBAs): 262144 (1GiB) 00:07:32.658 Utilization (in LBAs): 262144 (1GiB) 00:07:32.658 Thin Provisioning: Not Supported 00:07:32.658 Per-NS Atomic Units: No 00:07:32.658 Maximum Single Source Range Length: 128 00:07:32.658 Maximum Copy Length: 128 00:07:32.658 Maximum Source Range Count: 128 00:07:32.658 NGUID/EUI64 Never Reused: No 00:07:32.658 Namespace Write Protected: No 00:07:32.658 Endurance group ID: 1 00:07:32.658 Number of LBA Formats: 8 00:07:32.658 Current LBA Format: LBA Format #04 00:07:32.658 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.658 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.658 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.658 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.658 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.658 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.658 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.658 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.658 00:07:32.658 Get Feature FDP: 00:07:32.658 ================ 00:07:32.658 Enabled: Yes 00:07:32.658 FDP configuration index: 0 00:07:32.658 00:07:32.658 FDP configurations log page 00:07:32.658 =========================== 00:07:32.658 Number of FDP configurations: 1 00:07:32.658 Version: 0 00:07:32.658 Size: 112 00:07:32.658 FDP Configuration Descriptor: 0 00:07:32.658 Descriptor Size: 96 00:07:32.658 Reclaim Group Identifier format: 2 00:07:32.658 FDP Volatile Write Cache: Not Present 00:07:32.658 FDP Configuration: Valid 00:07:32.658 Vendor Specific Size: 0 00:07:32.658 Number of Reclaim Groups: 2 00:07:32.658 Number of Reclaim Unit Handles: 8 00:07:32.658 Max Placement Identifiers: 128 00:07:32.658 Number of Namespaces Supported: 256 00:07:32.658 Reclaim unit Nominal Size: 6000000 bytes 00:07:32.658 Estimated Reclaim Unit Time Limit: Not Reported 00:07:32.658 RUH Desc #000: RUH Type: Initially Isolated 00:07:32.658 RUH Desc #001: RUH Type: Initially Isolated 00:07:32.658 RUH Desc #002: RUH Type: Initially Isolated 00:07:32.658 RUH Desc #003: RUH Type: Initially Isolated 00:07:32.658 RUH Desc #004: RUH Type: Initially Isolated 00:07:32.658 RUH Desc #005: RUH Type: Initially Isolated 00:07:32.658 RUH Desc #006: RUH Type: Initially Isolated 00:07:32.658 RUH Desc #007: RUH Type: Initially Isolated 00:07:32.658 00:07:32.658 FDP reclaim unit handle usage log page 00:07:32.658 [2024-11-28 04:57:01.787159] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74326 terminated unexpected 00:07:32.658 ====================================== 00:07:32.658 Number of Reclaim Unit Handles: 8 00:07:32.658 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:32.658 RUH Usage Desc #001: RUH Attributes: Unused 00:07:32.658 RUH Usage Desc #002: RUH Attributes: Unused 00:07:32.658 RUH Usage Desc #003: RUH Attributes: Unused 00:07:32.658 RUH Usage Desc #004: RUH Attributes: Unused 00:07:32.658 RUH Usage Desc #005: RUH Attributes: Unused 00:07:32.658 RUH Usage Desc #006: RUH Attributes: Unused 00:07:32.658 RUH Usage Desc #007: RUH Attributes: Unused 00:07:32.658 00:07:32.658 FDP statistics log page 00:07:32.658 ======================= 00:07:32.658 Host bytes with metadata written: 483958784 00:07:32.658 Media bytes with metadata written: 484012032
00:07:32.658 Media bytes erased: 0 00:07:32.658 00:07:32.658 FDP events log page 00:07:32.658 =================== 00:07:32.658 Number of FDP events: 0 00:07:32.658 00:07:32.658 NVM Specific Namespace Data 00:07:32.658 =========================== 00:07:32.658 Logical Block Storage Tag Mask: 0 00:07:32.658 Protection Information Capabilities: 00:07:32.658 16b Guard Protection Information Storage Tag Support: No 00:07:32.658 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.658 Storage Tag Check Read Support: No 00:07:32.658 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.659 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.659 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.659 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.659 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.659 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.659 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.659 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.659 ===================================================== 00:07:32.659 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:32.659 ===================================================== 00:07:32.659 Controller Capabilities/Features 00:07:32.659 ================================ 00:07:32.659 Vendor ID: 1b36 00:07:32.659 Subsystem Vendor ID: 1af4 00:07:32.659 Serial Number: 12340 00:07:32.659 Model Number: QEMU NVMe Ctrl 00:07:32.659 Firmware Version: 8.0.0 00:07:32.659 Recommended Arb Burst: 6 00:07:32.659 IEEE OUI Identifier: 00 54 52 00:07:32.659 Multi-path I/O 00:07:32.659 May have multiple subsystem ports: No 00:07:32.659 May have multiple controllers: No 00:07:32.659 Associated with SR-IOV VF: No 00:07:32.659 Max Data Transfer Size: 524288 00:07:32.659 Max Number of Namespaces: 256 00:07:32.659 Max Number of I/O Queues: 64 00:07:32.659 NVMe Specification Version (VS): 1.4 00:07:32.659 NVMe Specification Version (Identify): 1.4 00:07:32.659 Maximum Queue Entries: 2048 00:07:32.659 Contiguous Queues Required: Yes 00:07:32.659 Arbitration Mechanisms Supported 00:07:32.659 Weighted Round Robin: Not Supported 00:07:32.659 Vendor Specific: Not Supported 00:07:32.659 Reset Timeout: 7500 ms 00:07:32.659 Doorbell Stride: 4 bytes 00:07:32.659 NVM Subsystem Reset: Not Supported 00:07:32.659 Command Sets Supported 00:07:32.659 NVM Command Set: Supported 00:07:32.659 Boot Partition: Not Supported 00:07:32.659 Memory Page Size Minimum: 4096 bytes 00:07:32.659 Memory Page Size Maximum: 65536 bytes 00:07:32.659 Persistent Memory Region: Not Supported 00:07:32.659 Optional Asynchronous Events Supported 00:07:32.659 Namespace Attribute Notices: Supported 00:07:32.659 Firmware Activation Notices: Not Supported 00:07:32.659 ANA Change Notices: Not Supported 00:07:32.659 PLE Aggregate Log Change Notices: Not Supported 00:07:32.659 LBA Status Info Alert Notices: Not Supported 00:07:32.659 EGE Aggregate Log Change Notices: Not Supported 00:07:32.659 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.659 Zone Descriptor Change Notices: Not Supported 00:07:32.659 Discovery Log Change Notices: Not Supported 00:07:32.659 Controller Attributes 
00:07:32.659 128-bit Host Identifier: Not Supported 00:07:32.659 Non-Operational Permissive Mode: Not Supported 00:07:32.659 NVM Sets: Not Supported 00:07:32.659 Read Recovery Levels: Not Supported 00:07:32.659 Endurance Groups: Not Supported 00:07:32.659 Predictable Latency Mode: Not Supported 00:07:32.659 Traffic Based Keep Alive: Not Supported 00:07:32.659 Namespace Granularity: Not Supported 00:07:32.659 SQ Associations: Not Supported 00:07:32.659 UUID List: Not Supported 00:07:32.659 Multi-Domain Subsystem: Not Supported 00:07:32.659 Fixed Capacity Management: Not Supported 00:07:32.659 Variable Capacity Management: Not Supported 00:07:32.659 Delete Endurance Group: Not Supported 00:07:32.659 Delete NVM Set: Not Supported 00:07:32.659 Extended LBA Formats Supported: Supported 00:07:32.659 Flexible Data Placement Supported: Not Supported 00:07:32.659 00:07:32.659 Controller Memory Buffer Support 00:07:32.659 ================================ 00:07:32.659 Supported: No 00:07:32.659 00:07:32.659 Persistent Memory Region Support 00:07:32.659 ================================ 00:07:32.659 Supported: No 00:07:32.659 00:07:32.659 Admin Command Set Attributes 00:07:32.659 ============================ 00:07:32.659 Security Send/Receive: Not Supported 00:07:32.659 Format NVM: Supported 00:07:32.659 Firmware Activate/Download: Not Supported 00:07:32.659 Namespace Management: Supported 00:07:32.659 Device Self-Test: Not Supported 00:07:32.659 Directives: Supported 00:07:32.659 NVMe-MI: Not Supported 00:07:32.659 Virtualization Management: Not Supported 00:07:32.659 Doorbell Buffer Config: Supported 00:07:32.659 Get LBA Status Capability: Not Supported 00:07:32.659 Command & Feature Lockdown Capability: Not Supported 00:07:32.659 Abort Command Limit: 4 00:07:32.659 Async Event Request Limit: 4 00:07:32.659 Number of Firmware Slots: N/A 00:07:32.659 Firmware Slot 1 Read-Only: N/A 00:07:32.659 Firmware Activation Without Reset: N/A 00:07:32.659 Multiple Update Detection Support: N/A 00:07:32.659 Firmware Update Granularity: No Information Provided 00:07:32.659 Per-Namespace SMART Log: Yes 00:07:32.659 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.659 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:32.659 Command Effects Log Page: Supported 00:07:32.659 Get Log Page Extended Data: Supported 00:07:32.659 Telemetry Log Pages: Not Supported 00:07:32.659 Persistent Event Log Pages: Not Supported 00:07:32.659 Supported Log Pages Log Page: May Support 00:07:32.659 Commands Supported & Effects Log Page: Not Supported 00:07:32.659 Feature Identifiers & Effects Log Page:May Support 00:07:32.659 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.659 Data Area 4 for Telemetry Log: Not Supported 00:07:32.659 Error Log Page Entries Supported: 1 00:07:32.659 Keep Alive: Not Supported 00:07:32.659 00:07:32.659 NVM Command Set Attributes 00:07:32.659 ========================== 00:07:32.659 Submission Queue Entry Size 00:07:32.659 Max: 64 00:07:32.659 Min: 64 00:07:32.659 Completion Queue Entry Size 00:07:32.659 Max: 16 00:07:32.659 Min: 16 00:07:32.659 Number of Namespaces: 256 00:07:32.659 Compare Command: Supported 00:07:32.659 Write Uncorrectable Command: Not Supported 00:07:32.659 Dataset Management Command: Supported 00:07:32.659 Write Zeroes Command: Supported 00:07:32.659 Set Features Save Field: Supported 00:07:32.659 Reservations: Not Supported 00:07:32.659 Timestamp: Supported 00:07:32.659 Copy: Supported 00:07:32.659 Volatile Write Cache: Present 00:07:32.659 Atomic Write Unit
(Normal): 1 00:07:32.659 Atomic Write Unit (PFail): 1 00:07:32.659 Atomic Compare & Write Unit: 1 00:07:32.659 Fused Compare & Write: Not Supported 00:07:32.660 Scatter-Gather List 00:07:32.660 SGL Command Set: Supported 00:07:32.660 SGL Keyed: Not Supported 00:07:32.660 SGL Bit Bucket Descriptor: Not Supported 00:07:32.660 SGL Metadata Pointer: Not Supported 00:07:32.660 Oversized SGL: Not Supported 00:07:32.660 SGL Metadata Address: Not Supported 00:07:32.660 SGL Offset: Not Supported 00:07:32.660 Transport SGL Data Block: Not Supported 00:07:32.660 Replay Protected Memory Block: Not Supported 00:07:32.660 00:07:32.660 Firmware Slot Information 00:07:32.660 ========================= 00:07:32.660 Active slot: 1 00:07:32.660 Slot 1 Firmware Revision: 1.0 00:07:32.660 00:07:32.660 00:07:32.660 Commands Supported and Effects 00:07:32.660 ============================== 00:07:32.660 Admin Commands 00:07:32.660 -------------- 00:07:32.660 Delete I/O Submission Queue (00h): Supported 00:07:32.660 Create I/O Submission Queue (01h): Supported 00:07:32.660 Get Log Page (02h): Supported 00:07:32.660 Delete I/O Completion Queue (04h): Supported 00:07:32.660 Create I/O Completion Queue (05h): Supported 00:07:32.660 Identify (06h): Supported 00:07:32.660 Abort (08h): Supported 00:07:32.660 Set Features (09h): Supported 00:07:32.660 Get Features (0Ah): Supported 00:07:32.660 Asynchronous Event Request (0Ch): Supported 00:07:32.660 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.660 Directive Send (19h): Supported 00:07:32.660 Directive Receive (1Ah): Supported 00:07:32.660 Virtualization Management (1Ch): Supported 00:07:32.660 Doorbell Buffer Config (7Ch): Supported 00:07:32.660 Format NVM (80h): Supported LBA-Change 00:07:32.660 I/O Commands 00:07:32.660 ------------ 00:07:32.660 Flush (00h): Supported LBA-Change 00:07:32.660 Write (01h): Supported LBA-Change 00:07:32.660 Read (02h): Supported 00:07:32.660 Compare (05h): Supported 00:07:32.660 Write Zeroes (08h): Supported LBA-Change 00:07:32.660 Dataset Management (09h): Supported LBA-Change 00:07:32.660 Unknown (0Ch): Supported 00:07:32.660 Unknown (12h): Supported 00:07:32.660 Copy (19h): Supported LBA-Change 00:07:32.660 Unknown (1Dh): Supported LBA-Change 00:07:32.660 00:07:32.660 Error Log 00:07:32.660 ========= 00:07:32.660 00:07:32.660 Arbitration 00:07:32.660 =========== 00:07:32.660 Arbitration Burst: no limit 00:07:32.660 00:07:32.660 Power Management 00:07:32.660 ================ 00:07:32.660 Number of Power States: 1 00:07:32.660 Current Power State: Power State #0 00:07:32.660 Power State #0: 00:07:32.660 Max Power: 25.00 W 00:07:32.660 Non-Operational State: Operational 00:07:32.660 Entry Latency: 16 microseconds 00:07:32.660 Exit Latency: 4 microseconds 00:07:32.660 Relative Read Throughput: 0 00:07:32.660 Relative Read Latency: 0 00:07:32.660 Relative Write Throughput: 0 00:07:32.660 Relative Write Latency: 0 00:07:32.660 Idle Power: Not Reported 00:07:32.660 Active Power: Not Reported 00:07:32.660 Non-Operational Permissive Mode: Not Supported 00:07:32.660 00:07:32.660 Health Information 00:07:32.660 ================== 00:07:32.660 Critical Warnings: 00:07:32.660 Available Spare Space: OK 00:07:32.660 Temperature: OK 00:07:32.660 Device Reliability: OK 00:07:32.660 Read Only: No 00:07:32.660 Volatile Memory Backup: OK 00:07:32.660 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.660 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.660 Available Spare: 0% 00:07:32.660 Available Spare Threshold: 0% 
00:07:32.660 Life Percentage Used: 0% 00:07:32.660 Data Units Read: 695 00:07:32.660 Data Units Written: 623 00:07:32.660 Host Read Commands: 39590 00:07:32.660 Host Write Commands: 39376 00:07:32.660 Controller Busy Time: 0 minutes 00:07:32.660 Power Cycles: 0 00:07:32.660 Power On Hours: 0 hours 00:07:32.660 Unsafe Shutdowns: 0 00:07:32.660 Unrecoverable Media Errors: 0 00:07:32.660 Lifetime Error Log Entries: 0 00:07:32.660 Warning Temperature Time: 0 minutes 00:07:32.660 Critical Temperature Time: 0 minutes 00:07:32.660 00:07:32.660 Number of Queues 00:07:32.660 ================ 00:07:32.660 Number of I/O Submission Queues: 64 00:07:32.660 Number of I/O Completion Queues: 64 00:07:32.660 00:07:32.660 ZNS Specific Controller Data 00:07:32.660 ============================ 00:07:32.660 Zone Append Size Limit: 0 00:07:32.660 00:07:32.660 00:07:32.660 Active Namespaces 00:07:32.660 ================= 00:07:32.660 Namespace ID:1 00:07:32.660 Error Recovery Timeout: Unlimited 00:07:32.660 Command Set Identifier: NVM (00h) 00:07:32.660 Deallocate: Supported 00:07:32.660 Deallocated/Unwritten Error: Supported 00:07:32.660 Deallocated Read Value: All 0x00 00:07:32.660 Deallocate in Write Zeroes: Not Supported 00:07:32.660 Deallocated Guard Field: 0xFFFF 00:07:32.660 Flush: Supported 00:07:32.660 Reservation: Not Supported 00:07:32.660 Metadata Transferred as: Separate Metadata Buffer 00:07:32.660 Namespace Sharing Capabilities: Private 00:07:32.660 Size (in LBAs): 1548666 (5GiB) 00:07:32.660 Capacity (in LBAs): 1548666 (5GiB) 00:07:32.660 Utilization (in LBAs): 1548666 (5GiB) 00:07:32.660 Thin Provisioning: Not Supported 00:07:32.660 Per-NS Atomic Units: No 00:07:32.660 Maximum Single Source Range Length: 128 00:07:32.660 Maximum Copy Length: 128 00:07:32.660 Maximum Source Range Count: 128 00:07:32.660 NGUID/EUI64 Never Reused: No 00:07:32.660 Namespace Write Protected: No 00:07:32.660 Number of LBA Formats: 8 00:07:32.660 [2024-11-28 04:57:01.787852] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74326 terminated unexpected 00:07:32.660 Current LBA Format: LBA Format #07 00:07:32.660 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.660 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.660 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.660 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.660 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.660 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.660 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.660 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.660 00:07:32.660 NVM Specific Namespace Data 00:07:32.660 =========================== 00:07:32.660 Logical Block Storage Tag Mask: 0 00:07:32.660 Protection Information Capabilities: 00:07:32.660 16b Guard Protection Information Storage Tag Support: No 00:07:32.660 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.660 Storage Tag Check Read Support: No 00:07:32.660 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.660 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.661 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.661 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.661 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard
PI 00:07:32.661 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.661 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.661 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.661 ===================================================== 00:07:32.661 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:32.661 ===================================================== 00:07:32.661 Controller Capabilities/Features 00:07:32.661 ================================ 00:07:32.661 Vendor ID: 1b36 00:07:32.661 Subsystem Vendor ID: 1af4 00:07:32.661 Serial Number: 12341 00:07:32.661 Model Number: QEMU NVMe Ctrl 00:07:32.661 Firmware Version: 8.0.0 00:07:32.661 Recommended Arb Burst: 6 00:07:32.661 IEEE OUI Identifier: 00 54 52 00:07:32.661 Multi-path I/O 00:07:32.661 May have multiple subsystem ports: No 00:07:32.661 May have multiple controllers: No 00:07:32.661 Associated with SR-IOV VF: No 00:07:32.661 Max Data Transfer Size: 524288 00:07:32.661 Max Number of Namespaces: 256 00:07:32.661 Max Number of I/O Queues: 64 00:07:32.661 NVMe Specification Version (VS): 1.4 00:07:32.661 NVMe Specification Version (Identify): 1.4 00:07:32.661 Maximum Queue Entries: 2048 00:07:32.661 Contiguous Queues Required: Yes 00:07:32.661 Arbitration Mechanisms Supported 00:07:32.661 Weighted Round Robin: Not Supported 00:07:32.661 Vendor Specific: Not Supported 00:07:32.661 Reset Timeout: 7500 ms 00:07:32.661 Doorbell Stride: 4 bytes 00:07:32.661 NVM Subsystem Reset: Not Supported 00:07:32.661 Command Sets Supported 00:07:32.661 NVM Command Set: Supported 00:07:32.661 Boot Partition: Not Supported 00:07:32.661 Memory Page Size Minimum: 4096 bytes 00:07:32.661 Memory Page Size Maximum: 65536 bytes 00:07:32.661 Persistent Memory Region: Not Supported 00:07:32.661 Optional Asynchronous Events Supported 00:07:32.661 Namespace Attribute Notices: Supported 00:07:32.661 Firmware Activation Notices: Not Supported 00:07:32.661 ANA Change Notices: Not Supported 00:07:32.661 PLE Aggregate Log Change Notices: Not Supported 00:07:32.661 LBA Status Info Alert Notices: Not Supported 00:07:32.661 EGE Aggregate Log Change Notices: Not Supported 00:07:32.661 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.661 Zone Descriptor Change Notices: Not Supported 00:07:32.661 Discovery Log Change Notices: Not Supported 00:07:32.661 Controller Attributes 00:07:32.661 128-bit Host Identifier: Not Supported 00:07:32.661 Non-Operational Permissive Mode: Not Supported 00:07:32.661 NVM Sets: Not Supported 00:07:32.661 Read Recovery Levels: Not Supported 00:07:32.661 Endurance Groups: Not Supported 00:07:32.661 Predictable Latency Mode: Not Supported 00:07:32.661 Traffic Based Keep Alive: Not Supported 00:07:32.661 Namespace Granularity: Not Supported 00:07:32.661 SQ Associations: Not Supported 00:07:32.661 UUID List: Not Supported 00:07:32.661 Multi-Domain Subsystem: Not Supported 00:07:32.661 Fixed Capacity Management: Not Supported 00:07:32.661 Variable Capacity Management: Not Supported 00:07:32.661 Delete Endurance Group: Not Supported 00:07:32.661 Delete NVM Set: Not Supported 00:07:32.661 Extended LBA Formats Supported: Supported 00:07:32.661 Flexible Data Placement Supported: Not Supported 00:07:32.661 00:07:32.661 Controller Memory Buffer Support 00:07:32.661 ================================ 00:07:32.661 Supported: No 00:07:32.661 00:07:32.661 Persistent Memory Region Support 00:07:32.661
================================ 00:07:32.661 Supported: No 00:07:32.661 00:07:32.661 Admin Command Set Attributes 00:07:32.661 ============================ 00:07:32.661 Security Send/Receive: Not Supported 00:07:32.661 Format NVM: Supported 00:07:32.661 Firmware Activate/Download: Not Supported 00:07:32.661 Namespace Management: Supported 00:07:32.661 Device Self-Test: Not Supported 00:07:32.661 Directives: Supported 00:07:32.661 NVMe-MI: Not Supported 00:07:32.661 Virtualization Management: Not Supported 00:07:32.661 Doorbell Buffer Config: Supported 00:07:32.661 Get LBA Status Capability: Not Supported 00:07:32.661 Command & Feature Lockdown Capability: Not Supported 00:07:32.661 Abort Command Limit: 4 00:07:32.661 Async Event Request Limit: 4 00:07:32.661 Number of Firmware Slots: N/A 00:07:32.661 Firmware Slot 1 Read-Only: N/A 00:07:32.661 Firmware Activation Without Reset: N/A 00:07:32.661 Multiple Update Detection Support: N/A 00:07:32.661 Firmware Update Granularity: No Information Provided 00:07:32.661 Per-Namespace SMART Log: Yes 00:07:32.661 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.661 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:32.661 Command Effects Log Page: Supported 00:07:32.661 Get Log Page Extended Data: Supported 00:07:32.661 Telemetry Log Pages: Not Supported 00:07:32.661 Persistent Event Log Pages: Not Supported 00:07:32.661 Supported Log Pages Log Page: May Support 00:07:32.661 Commands Supported & Effects Log Page: Not Supported 00:07:32.661 Feature Identifiers & Effects Log Page:May Support 00:07:32.661 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.661 Data Area 4 for Telemetry Log: Not Supported 00:07:32.661 Error Log Page Entries Supported: 1 00:07:32.661 Keep Alive: Not Supported 00:07:32.661 00:07:32.661 NVM Command Set Attributes 00:07:32.661 ========================== 00:07:32.661 Submission Queue Entry Size 00:07:32.661 Max: 64 00:07:32.661 Min: 64 00:07:32.661 Completion Queue Entry Size 00:07:32.661 Max: 16 00:07:32.661 Min: 16 00:07:32.661 Number of Namespaces: 256 00:07:32.661 Compare Command: Supported 00:07:32.661 Write Uncorrectable Command: Not Supported 00:07:32.661 Dataset Management Command: Supported 00:07:32.661 Write Zeroes Command: Supported 00:07:32.661 Set Features Save Field: Supported 00:07:32.661 Reservations: Not Supported 00:07:32.661 Timestamp: Supported 00:07:32.661 Copy: Supported 00:07:32.661 Volatile Write Cache: Present 00:07:32.661 Atomic Write Unit (Normal): 1 00:07:32.661 Atomic Write Unit (PFail): 1 00:07:32.661 Atomic Compare & Write Unit: 1 00:07:32.661 Fused Compare & Write: Not Supported 00:07:32.661 Scatter-Gather List 00:07:32.661 SGL Command Set: Supported 00:07:32.661 SGL Keyed: Not Supported 00:07:32.661 SGL Bit Bucket Descriptor: Not Supported 00:07:32.662 SGL Metadata Pointer: Not Supported 00:07:32.662 Oversized SGL: Not Supported 00:07:32.662 SGL Metadata Address: Not Supported 00:07:32.662 SGL Offset: Not Supported 00:07:32.662 Transport SGL Data Block: Not Supported 00:07:32.662 Replay Protected Memory Block: Not Supported 00:07:32.662 00:07:32.662 Firmware Slot Information 00:07:32.662 ========================= 00:07:32.662 Active slot: 1 00:07:32.662 Slot 1 Firmware Revision: 1.0 00:07:32.662 00:07:32.662 00:07:32.662 Commands Supported and Effects 00:07:32.662 ============================== 00:07:32.662 Admin Commands 00:07:32.662 -------------- 00:07:32.662 Delete I/O Submission Queue (00h): Supported 00:07:32.662 Create I/O Submission Queue (01h): Supported 00:07:32.662 
Get Log Page (02h): Supported 00:07:32.662 Delete I/O Completion Queue (04h): Supported 00:07:32.662 Create I/O Completion Queue (05h): Supported 00:07:32.662 Identify (06h): Supported 00:07:32.662 Abort (08h): Supported 00:07:32.662 Set Features (09h): Supported 00:07:32.662 Get Features (0Ah): Supported 00:07:32.662 Asynchronous Event Request (0Ch): Supported 00:07:32.662 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.662 Directive Send (19h): Supported 00:07:32.662 Directive Receive (1Ah): Supported 00:07:32.662 Virtualization Management (1Ch): Supported 00:07:32.662 Doorbell Buffer Config (7Ch): Supported 00:07:32.662 Format NVM (80h): Supported LBA-Change 00:07:32.662 I/O Commands 00:07:32.662 ------------ 00:07:32.662 Flush (00h): Supported LBA-Change 00:07:32.662 Write (01h): Supported LBA-Change 00:07:32.662 Read (02h): Supported 00:07:32.662 Compare (05h): Supported 00:07:32.662 Write Zeroes (08h): Supported LBA-Change 00:07:32.662 Dataset Management (09h): Supported LBA-Change 00:07:32.662 Unknown (0Ch): Supported 00:07:32.662 Unknown (12h): Supported 00:07:32.662 Copy (19h): Supported LBA-Change 00:07:32.662 Unknown (1Dh): Supported LBA-Change 00:07:32.662 00:07:32.662 Error Log 00:07:32.662 ========= 00:07:32.662 00:07:32.662 Arbitration 00:07:32.662 =========== 00:07:32.662 Arbitration Burst: no limit 00:07:32.662 00:07:32.662 Power Management 00:07:32.662 ================ 00:07:32.662 Number of Power States: 1 00:07:32.662 Current Power State: Power State #0 00:07:32.662 Power State #0: 00:07:32.662 Max Power: 25.00 W 00:07:32.662 Non-Operational State: Operational 00:07:32.662 Entry Latency: 16 microseconds 00:07:32.662 Exit Latency: 4 microseconds 00:07:32.662 Relative Read Throughput: 0 00:07:32.662 Relative Read Latency: 0 00:07:32.662 Relative Write Throughput: 0 00:07:32.662 Relative Write Latency: 0 00:07:32.662 Idle Power: Not Reported 00:07:32.662 Active Power: Not Reported 00:07:32.662 Non-Operational Permissive Mode: Not Supported 00:07:32.662 00:07:32.662 Health Information 00:07:32.662 ================== 00:07:32.662 Critical Warnings: 00:07:32.662 Available Spare Space: OK 00:07:32.662 Temperature: OK 00:07:32.662 Device Reliability: OK 00:07:32.662 Read Only: No 00:07:32.662 Volatile Memory Backup: OK 00:07:32.662 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.662 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.662 Available Spare: 0% 00:07:32.662 Available Spare Threshold: 0% 00:07:32.662 Life Percentage Used: 0% 00:07:32.662 Data Units Read: 1028 00:07:32.662 Data Units Written: 893 00:07:32.662 Host Read Commands: 56287 00:07:32.662 Host Write Commands: 55048 00:07:32.662 Controller Busy Time: 0 minutes 00:07:32.662 Power Cycles: 0 00:07:32.662 Power On Hours: 0 hours 00:07:32.662 Unsafe Shutdowns: 0 00:07:32.662 Unrecoverable Media Errors: 0 00:07:32.662 Lifetime Error Log Entries: 0 00:07:32.662 Warning Temperature Time: 0 minutes 00:07:32.662 Critical Temperature Time: 0 minutes 00:07:32.662 00:07:32.662 Number of Queues 00:07:32.662 ================ 00:07:32.662 Number of I/O Submission Queues: 64 00:07:32.662 Number of I/O Completion Queues: 64 00:07:32.662 00:07:32.662 ZNS Specific Controller Data 00:07:32.662 ============================ 00:07:32.662 Zone Append Size Limit: 0 00:07:32.662 00:07:32.662 00:07:32.662 Active Namespaces 00:07:32.662 ================= 00:07:32.662 Namespace ID:1 00:07:32.662 Error Recovery Timeout: Unlimited 00:07:32.662 Command Set Identifier: NVM (00h) 00:07:32.662 Deallocate: Supported 
00:07:32.662 Deallocated/Unwritten Error: Supported 00:07:32.662 Deallocated Read Value: All 0x00 00:07:32.662 Deallocate in Write Zeroes: Not Supported 00:07:32.662 Deallocated Guard Field: 0xFFFF 00:07:32.662 Flush: Supported 00:07:32.662 Reservation: Not Supported 00:07:32.662 Namespace Sharing Capabilities: Private 00:07:32.662 Size (in LBAs): 1310720 (5GiB) 00:07:32.662 Capacity (in LBAs): 1310720 (5GiB) 00:07:32.662 Utilization (in LBAs): 1310720 (5GiB) 00:07:32.662 Thin Provisioning: Not Supported 00:07:32.662 Per-NS Atomic Units: No 00:07:32.662 Maximum Single Source Range Length: 128 00:07:32.662 Maximum Copy Length: 128 00:07:32.662 Maximum Source Range Count: 128 00:07:32.662 NGUID/EUI64 Never Reused: No 00:07:32.662 Namespace Write Protected: No 00:07:32.662 Number of LBA Formats: 8 00:07:32.662 Current LBA Format: LBA Format #04 00:07:32.662 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.662 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.662 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.662 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.662 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.662 LBA Forma[2024-11-28 04:57:01.788564] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74326 terminated unexpected 00:07:32.662 t #05: Data Size: 4096 Metadata Size: 8 00:07:32.662 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.662 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.662 00:07:32.662 NVM Specific Namespace Data 00:07:32.662 =========================== 00:07:32.662 Logical Block Storage Tag Mask: 0 00:07:32.662 Protection Information Capabilities: 00:07:32.662 16b Guard Protection Information Storage Tag Support: No 00:07:32.663 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.663 Storage Tag Check Read Support: No 00:07:32.663 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.663 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.663 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.663 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.663 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.663 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.663 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.663 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.663 ===================================================== 00:07:32.663 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:32.663 ===================================================== 00:07:32.663 Controller Capabilities/Features 00:07:32.663 ================================ 00:07:32.663 Vendor ID: 1b36 00:07:32.663 Subsystem Vendor ID: 1af4 00:07:32.663 Serial Number: 12342 00:07:32.663 Model Number: QEMU NVMe Ctrl 00:07:32.663 Firmware Version: 8.0.0 00:07:32.663 Recommended Arb Burst: 6 00:07:32.663 IEEE OUI Identifier: 00 54 52 00:07:32.663 Multi-path I/O 00:07:32.663 May have multiple subsystem ports: No 00:07:32.663 May have multiple controllers: No 00:07:32.663 Associated with SR-IOV VF: No 00:07:32.663 Max Data Transfer Size: 524288 00:07:32.663 Max Number of Namespaces: 256 00:07:32.663 
Max Number of I/O Queues: 64 00:07:32.663 NVMe Specification Version (VS): 1.4 00:07:32.663 NVMe Specification Version (Identify): 1.4 00:07:32.663 Maximum Queue Entries: 2048 00:07:32.663 Contiguous Queues Required: Yes 00:07:32.663 Arbitration Mechanisms Supported 00:07:32.663 Weighted Round Robin: Not Supported 00:07:32.663 Vendor Specific: Not Supported 00:07:32.663 Reset Timeout: 7500 ms 00:07:32.663 Doorbell Stride: 4 bytes 00:07:32.663 NVM Subsystem Reset: Not Supported 00:07:32.663 Command Sets Supported 00:07:32.663 NVM Command Set: Supported 00:07:32.663 Boot Partition: Not Supported 00:07:32.663 Memory Page Size Minimum: 4096 bytes 00:07:32.663 Memory Page Size Maximum: 65536 bytes 00:07:32.663 Persistent Memory Region: Not Supported 00:07:32.663 Optional Asynchronous Events Supported 00:07:32.663 Namespace Attribute Notices: Supported 00:07:32.663 Firmware Activation Notices: Not Supported 00:07:32.663 ANA Change Notices: Not Supported 00:07:32.663 PLE Aggregate Log Change Notices: Not Supported 00:07:32.663 LBA Status Info Alert Notices: Not Supported 00:07:32.663 EGE Aggregate Log Change Notices: Not Supported 00:07:32.663 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.663 Zone Descriptor Change Notices: Not Supported 00:07:32.663 Discovery Log Change Notices: Not Supported 00:07:32.663 Controller Attributes 00:07:32.663 128-bit Host Identifier: Not Supported 00:07:32.663 Non-Operational Permissive Mode: Not Supported 00:07:32.663 NVM Sets: Not Supported 00:07:32.663 Read Recovery Levels: Not Supported 00:07:32.663 Endurance Groups: Not Supported 00:07:32.663 Predictable Latency Mode: Not Supported 00:07:32.663 Traffic Based Keep ALive: Not Supported 00:07:32.663 Namespace Granularity: Not Supported 00:07:32.663 SQ Associations: Not Supported 00:07:32.663 UUID List: Not Supported 00:07:32.663 Multi-Domain Subsystem: Not Supported 00:07:32.663 Fixed Capacity Management: Not Supported 00:07:32.663 Variable Capacity Management: Not Supported 00:07:32.663 Delete Endurance Group: Not Supported 00:07:32.663 Delete NVM Set: Not Supported 00:07:32.663 Extended LBA Formats Supported: Supported 00:07:32.663 Flexible Data Placement Supported: Not Supported 00:07:32.663 00:07:32.663 Controller Memory Buffer Support 00:07:32.663 ================================ 00:07:32.663 Supported: No 00:07:32.663 00:07:32.663 Persistent Memory Region Support 00:07:32.663 ================================ 00:07:32.663 Supported: No 00:07:32.663 00:07:32.663 Admin Command Set Attributes 00:07:32.663 ============================ 00:07:32.663 Security Send/Receive: Not Supported 00:07:32.663 Format NVM: Supported 00:07:32.663 Firmware Activate/Download: Not Supported 00:07:32.663 Namespace Management: Supported 00:07:32.663 Device Self-Test: Not Supported 00:07:32.663 Directives: Supported 00:07:32.663 NVMe-MI: Not Supported 00:07:32.663 Virtualization Management: Not Supported 00:07:32.663 Doorbell Buffer Config: Supported 00:07:32.663 Get LBA Status Capability: Not Supported 00:07:32.663 Command & Feature Lockdown Capability: Not Supported 00:07:32.663 Abort Command Limit: 4 00:07:32.663 Async Event Request Limit: 4 00:07:32.663 Number of Firmware Slots: N/A 00:07:32.663 Firmware Slot 1 Read-Only: N/A 00:07:32.663 Firmware Activation Without Reset: N/A 00:07:32.663 Multiple Update Detection Support: N/A 00:07:32.663 Firmware Update Granularity: No Information Provided 00:07:32.663 Per-Namespace SMART Log: Yes 00:07:32.663 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.663 
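The "Max Data Transfer Size: 524288" reported above is consistent with the 4096-byte minimum memory page size also shown: the MDTS field is a power-of-two multiple of that page size, so 524288 bytes corresponds to MDTS = 7. A one-line check under that reading:

# 2^MDTS minimum-size pages: 2^7 * 4096 B = 524288 B, as reported above.
echo $(( 2**7 * 4096 ))   # 524288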
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:32.663 Command Effects Log Page: Supported 00:07:32.663 Get Log Page Extended Data: Supported 00:07:32.663 Telemetry Log Pages: Not Supported 00:07:32.663 Persistent Event Log Pages: Not Supported 00:07:32.663 Supported Log Pages Log Page: May Support 00:07:32.663 Commands Supported & Effects Log Page: Not Supported 00:07:32.663 Feature Identifiers & Effects Log Page:May Support 00:07:32.663 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.663 Data Area 4 for Telemetry Log: Not Supported 00:07:32.664 Error Log Page Entries Supported: 1 00:07:32.664 Keep Alive: Not Supported 00:07:32.664 00:07:32.664 NVM Command Set Attributes 00:07:32.664 ========================== 00:07:32.664 Submission Queue Entry Size 00:07:32.664 Max: 64 00:07:32.664 Min: 64 00:07:32.664 Completion Queue Entry Size 00:07:32.664 Max: 16 00:07:32.664 Min: 16 00:07:32.664 Number of Namespaces: 256 00:07:32.664 Compare Command: Supported 00:07:32.664 Write Uncorrectable Command: Not Supported 00:07:32.664 Dataset Management Command: Supported 00:07:32.664 Write Zeroes Command: Supported 00:07:32.664 Set Features Save Field: Supported 00:07:32.664 Reservations: Not Supported 00:07:32.664 Timestamp: Supported 00:07:32.664 Copy: Supported 00:07:32.664 Volatile Write Cache: Present 00:07:32.664 Atomic Write Unit (Normal): 1 00:07:32.664 Atomic Write Unit (PFail): 1 00:07:32.664 Atomic Compare & Write Unit: 1 00:07:32.664 Fused Compare & Write: Not Supported 00:07:32.664 Scatter-Gather List 00:07:32.664 SGL Command Set: Supported 00:07:32.664 SGL Keyed: Not Supported 00:07:32.664 SGL Bit Bucket Descriptor: Not Supported 00:07:32.664 SGL Metadata Pointer: Not Supported 00:07:32.664 Oversized SGL: Not Supported 00:07:32.664 SGL Metadata Address: Not Supported 00:07:32.664 SGL Offset: Not Supported 00:07:32.664 Transport SGL Data Block: Not Supported 00:07:32.664 Replay Protected Memory Block: Not Supported 00:07:32.664 00:07:32.664 Firmware Slot Information 00:07:32.664 ========================= 00:07:32.664 Active slot: 1 00:07:32.664 Slot 1 Firmware Revision: 1.0 00:07:32.664 00:07:32.664 00:07:32.664 Commands Supported and Effects 00:07:32.664 ============================== 00:07:32.664 Admin Commands 00:07:32.664 -------------- 00:07:32.664 Delete I/O Submission Queue (00h): Supported 00:07:32.664 Create I/O Submission Queue (01h): Supported 00:07:32.664 Get Log Page (02h): Supported 00:07:32.664 Delete I/O Completion Queue (04h): Supported 00:07:32.664 Create I/O Completion Queue (05h): Supported 00:07:32.664 Identify (06h): Supported 00:07:32.664 Abort (08h): Supported 00:07:32.664 Set Features (09h): Supported 00:07:32.664 Get Features (0Ah): Supported 00:07:32.664 Asynchronous Event Request (0Ch): Supported 00:07:32.664 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.664 Directive Send (19h): Supported 00:07:32.664 Directive Receive (1Ah): Supported 00:07:32.664 Virtualization Management (1Ch): Supported 00:07:32.664 Doorbell Buffer Config (7Ch): Supported 00:07:32.664 Format NVM (80h): Supported LBA-Change 00:07:32.664 I/O Commands 00:07:32.664 ------------ 00:07:32.664 Flush (00h): Supported LBA-Change 00:07:32.664 Write (01h): Supported LBA-Change 00:07:32.664 Read (02h): Supported 00:07:32.664 Compare (05h): Supported 00:07:32.664 Write Zeroes (08h): Supported LBA-Change 00:07:32.664 Dataset Management (09h): Supported LBA-Change 00:07:32.664 Unknown (0Ch): Supported 00:07:32.664 Unknown (12h): Supported 00:07:32.664 Copy (19h): Supported 
LBA-Change 00:07:32.664 Unknown (1Dh): Supported LBA-Change 00:07:32.664 00:07:32.664 Error Log 00:07:32.664 ========= 00:07:32.664 00:07:32.664 Arbitration 00:07:32.664 =========== 00:07:32.664 Arbitration Burst: no limit 00:07:32.664 00:07:32.664 Power Management 00:07:32.664 ================ 00:07:32.664 Number of Power States: 1 00:07:32.664 Current Power State: Power State #0 00:07:32.664 Power State #0: 00:07:32.664 Max Power: 25.00 W 00:07:32.664 Non-Operational State: Operational 00:07:32.664 Entry Latency: 16 microseconds 00:07:32.664 Exit Latency: 4 microseconds 00:07:32.664 Relative Read Throughput: 0 00:07:32.664 Relative Read Latency: 0 00:07:32.664 Relative Write Throughput: 0 00:07:32.664 Relative Write Latency: 0 00:07:32.664 Idle Power: Not Reported 00:07:32.664 Active Power: Not Reported 00:07:32.664 Non-Operational Permissive Mode: Not Supported 00:07:32.664 00:07:32.664 Health Information 00:07:32.664 ================== 00:07:32.664 Critical Warnings: 00:07:32.664 Available Spare Space: OK 00:07:32.664 Temperature: OK 00:07:32.664 Device Reliability: OK 00:07:32.664 Read Only: No 00:07:32.664 Volatile Memory Backup: OK 00:07:32.664 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.664 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.664 Available Spare: 0% 00:07:32.664 Available Spare Threshold: 0% 00:07:32.664 Life Percentage Used: 0% 00:07:32.664 Data Units Read: 2212 00:07:32.664 Data Units Written: 2000 00:07:32.664 Host Read Commands: 120425 00:07:32.664 Host Write Commands: 118694 00:07:32.664 Controller Busy Time: 0 minutes 00:07:32.664 Power Cycles: 0 00:07:32.664 Power On Hours: 0 hours 00:07:32.664 Unsafe Shutdowns: 0 00:07:32.664 Unrecoverable Media Errors: 0 00:07:32.664 Lifetime Error Log Entries: 0 00:07:32.664 Warning Temperature Time: 0 minutes 00:07:32.664 Critical Temperature Time: 0 minutes 00:07:32.664 00:07:32.664 Number of Queues 00:07:32.664 ================ 00:07:32.664 Number of I/O Submission Queues: 64 00:07:32.664 Number of I/O Completion Queues: 64 00:07:32.664 00:07:32.664 ZNS Specific Controller Data 00:07:32.664 ============================ 00:07:32.664 Zone Append Size Limit: 0 00:07:32.664 00:07:32.664 00:07:32.664 Active Namespaces 00:07:32.664 ================= 00:07:32.664 Namespace ID:1 00:07:32.664 Error Recovery Timeout: Unlimited 00:07:32.664 Command Set Identifier: NVM (00h) 00:07:32.664 Deallocate: Supported 00:07:32.664 Deallocated/Unwritten Error: Supported 00:07:32.664 Deallocated Read Value: All 0x00 00:07:32.664 Deallocate in Write Zeroes: Not Supported 00:07:32.664 Deallocated Guard Field: 0xFFFF 00:07:32.664 Flush: Supported 00:07:32.664 Reservation: Not Supported 00:07:32.664 Namespace Sharing Capabilities: Private 00:07:32.664 Size (in LBAs): 1048576 (4GiB) 00:07:32.664 Capacity (in LBAs): 1048576 (4GiB) 00:07:32.664 Utilization (in LBAs): 1048576 (4GiB) 00:07:32.664 Thin Provisioning: Not Supported 00:07:32.664 Per-NS Atomic Units: No 00:07:32.664 Maximum Single Source Range Length: 128 00:07:32.664 Maximum Copy Length: 128 00:07:32.664 Maximum Source Range Count: 128 00:07:32.664 NGUID/EUI64 Never Reused: No 00:07:32.664 Namespace Write Protected: No 00:07:32.664 Number of LBA Formats: 8 00:07:32.665 Current LBA Format: LBA Format #04 00:07:32.665 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.665 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.665 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.665 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.665 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:07:32.665 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.665 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.665 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.665 00:07:32.665 NVM Specific Namespace Data 00:07:32.665 =========================== 00:07:32.665 Logical Block Storage Tag Mask: 0 00:07:32.665 Protection Information Capabilities: 00:07:32.665 16b Guard Protection Information Storage Tag Support: No 00:07:32.665 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.665 Storage Tag Check Read Support: No 00:07:32.665 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Namespace ID:2 00:07:32.665 Error Recovery Timeout: Unlimited 00:07:32.665 Command Set Identifier: NVM (00h) 00:07:32.665 Deallocate: Supported 00:07:32.665 Deallocated/Unwritten Error: Supported 00:07:32.665 Deallocated Read Value: All 0x00 00:07:32.665 Deallocate in Write Zeroes: Not Supported 00:07:32.665 Deallocated Guard Field: 0xFFFF 00:07:32.665 Flush: Supported 00:07:32.665 Reservation: Not Supported 00:07:32.665 Namespace Sharing Capabilities: Private 00:07:32.665 Size (in LBAs): 1048576 (4GiB) 00:07:32.665 Capacity (in LBAs): 1048576 (4GiB) 00:07:32.665 Utilization (in LBAs): 1048576 (4GiB) 00:07:32.665 Thin Provisioning: Not Supported 00:07:32.665 Per-NS Atomic Units: No 00:07:32.665 Maximum Single Source Range Length: 128 00:07:32.665 Maximum Copy Length: 128 00:07:32.665 Maximum Source Range Count: 128 00:07:32.665 NGUID/EUI64 Never Reused: No 00:07:32.665 Namespace Write Protected: No 00:07:32.665 Number of LBA Formats: 8 00:07:32.665 Current LBA Format: LBA Format #04 00:07:32.665 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.665 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.665 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.665 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.665 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.665 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.665 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.665 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.665 00:07:32.665 NVM Specific Namespace Data 00:07:32.665 =========================== 00:07:32.665 Logical Block Storage Tag Mask: 0 00:07:32.665 Protection Information Capabilities: 00:07:32.665 16b Guard Protection Information Storage Tag Support: No 00:07:32.665 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.665 Storage Tag Check Read Support: No 00:07:32.665 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:07:32.665 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Namespace ID:3 00:07:32.665 Error Recovery Timeout: Unlimited 00:07:32.665 Command Set Identifier: NVM (00h) 00:07:32.665 Deallocate: Supported 00:07:32.665 Deallocated/Unwritten Error: Supported 00:07:32.665 Deallocated Read Value: All 0x00 00:07:32.665 Deallocate in Write Zeroes: Not Supported 00:07:32.665 Deallocated Guard Field: 0xFFFF 00:07:32.665 Flush: Supported 00:07:32.665 Reservation: Not Supported 00:07:32.665 Namespace Sharing Capabilities: Private 00:07:32.665 Size (in LBAs): 1048576 (4GiB) 00:07:32.665 Capacity (in LBAs): 1048576 (4GiB) 00:07:32.665 Utilization (in LBAs): 1048576 (4GiB) 00:07:32.665 Thin Provisioning: Not Supported 00:07:32.665 Per-NS Atomic Units: No 00:07:32.665 Maximum Single Source Range Length: 128 00:07:32.665 Maximum Copy Length: 128 00:07:32.665 Maximum Source Range Count: 128 00:07:32.665 NGUID/EUI64 Never Reused: No 00:07:32.665 Namespace Write Protected: No 00:07:32.665 Number of LBA Formats: 8 00:07:32.665 Current LBA Format: LBA Format #04 00:07:32.665 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.665 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.665 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.665 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.665 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.665 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.665 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.665 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.665 00:07:32.665 NVM Specific Namespace Data 00:07:32.665 =========================== 00:07:32.665 Logical Block Storage Tag Mask: 0 00:07:32.665 Protection Information Capabilities: 00:07:32.665 16b Guard Protection Information Storage Tag Support: No 00:07:32.665 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.665 Storage Tag Check Read Support: No 00:07:32.665 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.665 04:57:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:32.665 04:57:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:32.925 ===================================================== 00:07:32.925 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:32.925 ===================================================== 00:07:32.925 Controller Capabilities/Features 00:07:32.925 ================================ 00:07:32.925 Vendor ID: 1b36 00:07:32.925 Subsystem Vendor ID: 1af4 00:07:32.925 Serial Number: 12340 00:07:32.925 Model Number: QEMU NVMe Ctrl 00:07:32.925 Firmware Version: 8.0.0 00:07:32.925 Recommended Arb Burst: 6 00:07:32.925 IEEE OUI Identifier: 00 54 52 00:07:32.925 Multi-path I/O 00:07:32.925 May have multiple subsystem ports: No 00:07:32.925 May have multiple controllers: No 00:07:32.925 Associated with SR-IOV VF: No 00:07:32.925 Max Data Transfer Size: 524288 00:07:32.925 Max Number of Namespaces: 256 00:07:32.925 Max Number of I/O Queues: 64 00:07:32.925 NVMe Specification Version (VS): 1.4 00:07:32.925 NVMe Specification Version (Identify): 1.4 00:07:32.925 Maximum Queue Entries: 2048 00:07:32.925 Contiguous Queues Required: Yes 00:07:32.925 Arbitration Mechanisms Supported 00:07:32.925 Weighted Round Robin: Not Supported 00:07:32.925 Vendor Specific: Not Supported 00:07:32.925 Reset Timeout: 7500 ms 00:07:32.925 Doorbell Stride: 4 bytes 00:07:32.925 NVM Subsystem Reset: Not Supported 00:07:32.925 Command Sets Supported 00:07:32.925 NVM Command Set: Supported 00:07:32.925 Boot Partition: Not Supported 00:07:32.925 Memory Page Size Minimum: 4096 bytes 00:07:32.925 Memory Page Size Maximum: 65536 bytes 00:07:32.925 Persistent Memory Region: Not Supported 00:07:32.925 Optional Asynchronous Events Supported 00:07:32.925 Namespace Attribute Notices: Supported 00:07:32.925 Firmware Activation Notices: Not Supported 00:07:32.925 ANA Change Notices: Not Supported 00:07:32.925 PLE Aggregate Log Change Notices: Not Supported 00:07:32.925 LBA Status Info Alert Notices: Not Supported 00:07:32.925 EGE Aggregate Log Change Notices: Not Supported 00:07:32.925 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.925 Zone Descriptor Change Notices: Not Supported 00:07:32.925 Discovery Log Change Notices: Not Supported 00:07:32.925 Controller Attributes 00:07:32.925 128-bit Host Identifier: Not Supported 00:07:32.925 Non-Operational Permissive Mode: Not Supported 00:07:32.925 NVM Sets: Not Supported 00:07:32.925 Read Recovery Levels: Not Supported 00:07:32.925 Endurance Groups: Not Supported 00:07:32.925 Predictable Latency Mode: Not Supported 00:07:32.925 Traffic Based Keep ALive: Not Supported 00:07:32.925 Namespace Granularity: Not Supported 00:07:32.925 SQ Associations: Not Supported 00:07:32.925 UUID List: Not Supported 00:07:32.925 Multi-Domain Subsystem: Not Supported 00:07:32.925 Fixed Capacity Management: Not Supported 00:07:32.925 Variable Capacity Management: Not Supported 00:07:32.925 Delete Endurance Group: Not Supported 00:07:32.926 Delete NVM Set: Not Supported 00:07:32.926 Extended LBA Formats Supported: Supported 00:07:32.926 Flexible Data Placement Supported: Not Supported 00:07:32.926 00:07:32.926 Controller Memory Buffer Support 00:07:32.926 ================================ 00:07:32.926 Supported: No 00:07:32.926 00:07:32.926 Persistent Memory Region Support 00:07:32.926 ================================ 00:07:32.926 Supported: No 00:07:32.926 00:07:32.926 Admin Command Set Attributes 00:07:32.926 ============================ 00:07:32.926 Security Send/Receive: Not Supported 00:07:32.926 
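Each dump in this section comes from one spdk_nvme_identify invocation like the one traced above. For pulling a single field out of such a dump, a small assumed helper, not part of the test scripts; the awk pattern relies on the "Field: value" layout seen in this log:

# Extract one field from an identify dump; here, the serial of 0000:00:10.0.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 |
    awk -F': ' '/Serial Number/ {print $2; exit}'   # -> 12340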
Format NVM: Supported 00:07:32.926 Firmware Activate/Download: Not Supported 00:07:32.926 Namespace Management: Supported 00:07:32.926 Device Self-Test: Not Supported 00:07:32.926 Directives: Supported 00:07:32.926 NVMe-MI: Not Supported 00:07:32.926 Virtualization Management: Not Supported 00:07:32.926 Doorbell Buffer Config: Supported 00:07:32.926 Get LBA Status Capability: Not Supported 00:07:32.926 Command & Feature Lockdown Capability: Not Supported 00:07:32.926 Abort Command Limit: 4 00:07:32.926 Async Event Request Limit: 4 00:07:32.926 Number of Firmware Slots: N/A 00:07:32.926 Firmware Slot 1 Read-Only: N/A 00:07:32.926 Firmware Activation Without Reset: N/A 00:07:32.926 Multiple Update Detection Support: N/A 00:07:32.926 Firmware Update Granularity: No Information Provided 00:07:32.926 Per-Namespace SMART Log: Yes 00:07:32.926 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.926 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:32.926 Command Effects Log Page: Supported 00:07:32.926 Get Log Page Extended Data: Supported 00:07:32.926 Telemetry Log Pages: Not Supported 00:07:32.926 Persistent Event Log Pages: Not Supported 00:07:32.926 Supported Log Pages Log Page: May Support 00:07:32.926 Commands Supported & Effects Log Page: Not Supported 00:07:32.926 Feature Identifiers & Effects Log Page:May Support 00:07:32.926 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.926 Data Area 4 for Telemetry Log: Not Supported 00:07:32.926 Error Log Page Entries Supported: 1 00:07:32.926 Keep Alive: Not Supported 00:07:32.926 00:07:32.926 NVM Command Set Attributes 00:07:32.926 ========================== 00:07:32.926 Submission Queue Entry Size 00:07:32.926 Max: 64 00:07:32.926 Min: 64 00:07:32.926 Completion Queue Entry Size 00:07:32.926 Max: 16 00:07:32.926 Min: 16 00:07:32.926 Number of Namespaces: 256 00:07:32.926 Compare Command: Supported 00:07:32.926 Write Uncorrectable Command: Not Supported 00:07:32.926 Dataset Management Command: Supported 00:07:32.926 Write Zeroes Command: Supported 00:07:32.926 Set Features Save Field: Supported 00:07:32.926 Reservations: Not Supported 00:07:32.926 Timestamp: Supported 00:07:32.926 Copy: Supported 00:07:32.926 Volatile Write Cache: Present 00:07:32.926 Atomic Write Unit (Normal): 1 00:07:32.926 Atomic Write Unit (PFail): 1 00:07:32.926 Atomic Compare & Write Unit: 1 00:07:32.926 Fused Compare & Write: Not Supported 00:07:32.926 Scatter-Gather List 00:07:32.926 SGL Command Set: Supported 00:07:32.926 SGL Keyed: Not Supported 00:07:32.926 SGL Bit Bucket Descriptor: Not Supported 00:07:32.926 SGL Metadata Pointer: Not Supported 00:07:32.926 Oversized SGL: Not Supported 00:07:32.926 SGL Metadata Address: Not Supported 00:07:32.926 SGL Offset: Not Supported 00:07:32.926 Transport SGL Data Block: Not Supported 00:07:32.926 Replay Protected Memory Block: Not Supported 00:07:32.926 00:07:32.926 Firmware Slot Information 00:07:32.926 ========================= 00:07:32.926 Active slot: 1 00:07:32.926 Slot 1 Firmware Revision: 1.0 00:07:32.926 00:07:32.926 00:07:32.926 Commands Supported and Effects 00:07:32.926 ============================== 00:07:32.926 Admin Commands 00:07:32.926 -------------- 00:07:32.926 Delete I/O Submission Queue (00h): Supported 00:07:32.926 Create I/O Submission Queue (01h): Supported 00:07:32.926 Get Log Page (02h): Supported 00:07:32.926 Delete I/O Completion Queue (04h): Supported 00:07:32.926 Create I/O Completion Queue (05h): Supported 00:07:32.926 Identify (06h): Supported 00:07:32.926 Abort (08h): Supported 
00:07:32.926 Set Features (09h): Supported 00:07:32.926 Get Features (0Ah): Supported 00:07:32.926 Asynchronous Event Request (0Ch): Supported 00:07:32.926 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.926 Directive Send (19h): Supported 00:07:32.926 Directive Receive (1Ah): Supported 00:07:32.926 Virtualization Management (1Ch): Supported 00:07:32.926 Doorbell Buffer Config (7Ch): Supported 00:07:32.926 Format NVM (80h): Supported LBA-Change 00:07:32.926 I/O Commands 00:07:32.926 ------------ 00:07:32.926 Flush (00h): Supported LBA-Change 00:07:32.926 Write (01h): Supported LBA-Change 00:07:32.926 Read (02h): Supported 00:07:32.926 Compare (05h): Supported 00:07:32.926 Write Zeroes (08h): Supported LBA-Change 00:07:32.926 Dataset Management (09h): Supported LBA-Change 00:07:32.926 Unknown (0Ch): Supported 00:07:32.926 Unknown (12h): Supported 00:07:32.926 Copy (19h): Supported LBA-Change 00:07:32.926 Unknown (1Dh): Supported LBA-Change 00:07:32.926 00:07:32.926 Error Log 00:07:32.926 ========= 00:07:32.926 00:07:32.926 Arbitration 00:07:32.926 =========== 00:07:32.926 Arbitration Burst: no limit 00:07:32.926 00:07:32.926 Power Management 00:07:32.926 ================ 00:07:32.926 Number of Power States: 1 00:07:32.926 Current Power State: Power State #0 00:07:32.926 Power State #0: 00:07:32.926 Max Power: 25.00 W 00:07:32.926 Non-Operational State: Operational 00:07:32.926 Entry Latency: 16 microseconds 00:07:32.926 Exit Latency: 4 microseconds 00:07:32.926 Relative Read Throughput: 0 00:07:32.926 Relative Read Latency: 0 00:07:32.926 Relative Write Throughput: 0 00:07:32.926 Relative Write Latency: 0 00:07:32.926 Idle Power: Not Reported 00:07:32.926 Active Power: Not Reported 00:07:32.926 Non-Operational Permissive Mode: Not Supported 00:07:32.926 00:07:32.926 Health Information 00:07:32.926 ================== 00:07:32.926 Critical Warnings: 00:07:32.926 Available Spare Space: OK 00:07:32.926 Temperature: OK 00:07:32.927 Device Reliability: OK 00:07:32.927 Read Only: No 00:07:32.927 Volatile Memory Backup: OK 00:07:32.927 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.927 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.927 Available Spare: 0% 00:07:32.927 Available Spare Threshold: 0% 00:07:32.927 Life Percentage Used: 0% 00:07:32.927 Data Units Read: 695 00:07:32.927 Data Units Written: 623 00:07:32.927 Host Read Commands: 39590 00:07:32.927 Host Write Commands: 39376 00:07:32.927 Controller Busy Time: 0 minutes 00:07:32.927 Power Cycles: 0 00:07:32.927 Power On Hours: 0 hours 00:07:32.927 Unsafe Shutdowns: 0 00:07:32.927 Unrecoverable Media Errors: 0 00:07:32.927 Lifetime Error Log Entries: 0 00:07:32.927 Warning Temperature Time: 0 minutes 00:07:32.927 Critical Temperature Time: 0 minutes 00:07:32.927 00:07:32.927 Number of Queues 00:07:32.927 ================ 00:07:32.927 Number of I/O Submission Queues: 64 00:07:32.927 Number of I/O Completion Queues: 64 00:07:32.927 00:07:32.927 ZNS Specific Controller Data 00:07:32.927 ============================ 00:07:32.927 Zone Append Size Limit: 0 00:07:32.927 00:07:32.927 00:07:32.927 Active Namespaces 00:07:32.927 ================= 00:07:32.927 Namespace ID:1 00:07:32.927 Error Recovery Timeout: Unlimited 00:07:32.927 Command Set Identifier: NVM (00h) 00:07:32.927 Deallocate: Supported 00:07:32.927 Deallocated/Unwritten Error: Supported 00:07:32.927 Deallocated Read Value: All 0x00 00:07:32.927 Deallocate in Write Zeroes: Not Supported 00:07:32.927 Deallocated Guard Field: 0xFFFF 00:07:32.927 Flush: 
Supported 00:07:32.927 Reservation: Not Supported 00:07:32.927 Metadata Transferred as: Separate Metadata Buffer 00:07:32.927 Namespace Sharing Capabilities: Private 00:07:32.927 Size (in LBAs): 1548666 (5GiB) 00:07:32.927 Capacity (in LBAs): 1548666 (5GiB) 00:07:32.927 Utilization (in LBAs): 1548666 (5GiB) 00:07:32.927 Thin Provisioning: Not Supported 00:07:32.927 Per-NS Atomic Units: No 00:07:32.927 Maximum Single Source Range Length: 128 00:07:32.927 Maximum Copy Length: 128 00:07:32.927 Maximum Source Range Count: 128 00:07:32.927 NGUID/EUI64 Never Reused: No 00:07:32.927 Namespace Write Protected: No 00:07:32.927 Number of LBA Formats: 8 00:07:32.927 Current LBA Format: LBA Format #07 00:07:32.927 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.927 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.927 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.927 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.927 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.927 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.927 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.927 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.927 00:07:32.927 NVM Specific Namespace Data 00:07:32.927 =========================== 00:07:32.927 Logical Block Storage Tag Mask: 0 00:07:32.927 Protection Information Capabilities: 00:07:32.927 16b Guard Protection Information Storage Tag Support: No 00:07:32.927 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.927 Storage Tag Check Read Support: No 00:07:32.927 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.927 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.927 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.927 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.927 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.927 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.927 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.927 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.927 04:57:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:32.927 04:57:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:32.927 ===================================================== 00:07:32.927 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:32.927 ===================================================== 00:07:32.927 Controller Capabilities/Features 00:07:32.927 ================================ 00:07:32.927 Vendor ID: 1b36 00:07:32.927 Subsystem Vendor ID: 1af4 00:07:32.927 Serial Number: 12341 00:07:32.927 Model Number: QEMU NVMe Ctrl 00:07:32.927 Firmware Version: 8.0.0 00:07:32.927 Recommended Arb Burst: 6 00:07:32.927 IEEE OUI Identifier: 00 54 52 00:07:32.927 Multi-path I/O 00:07:32.927 May have multiple subsystem ports: No 00:07:32.927 May have multiple controllers: No 00:07:32.927 Associated with SR-IOV VF: No 00:07:32.927 Max Data Transfer Size: 524288 00:07:32.927 Max Number of Namespaces: 256 00:07:32.927 Max Number of I/O Queues: 64 00:07:32.927 NVMe 
Specification Version (VS): 1.4 00:07:32.927 NVMe Specification Version (Identify): 1.4 00:07:32.927 Maximum Queue Entries: 2048 00:07:32.927 Contiguous Queues Required: Yes 00:07:32.927 Arbitration Mechanisms Supported 00:07:32.927 Weighted Round Robin: Not Supported 00:07:32.927 Vendor Specific: Not Supported 00:07:32.927 Reset Timeout: 7500 ms 00:07:32.927 Doorbell Stride: 4 bytes 00:07:32.927 NVM Subsystem Reset: Not Supported 00:07:32.927 Command Sets Supported 00:07:32.927 NVM Command Set: Supported 00:07:32.927 Boot Partition: Not Supported 00:07:32.927 Memory Page Size Minimum: 4096 bytes 00:07:32.927 Memory Page Size Maximum: 65536 bytes 00:07:32.927 Persistent Memory Region: Not Supported 00:07:32.927 Optional Asynchronous Events Supported 00:07:32.927 Namespace Attribute Notices: Supported 00:07:32.927 Firmware Activation Notices: Not Supported 00:07:32.927 ANA Change Notices: Not Supported 00:07:32.927 PLE Aggregate Log Change Notices: Not Supported 00:07:32.927 LBA Status Info Alert Notices: Not Supported 00:07:32.927 EGE Aggregate Log Change Notices: Not Supported 00:07:32.927 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.927 Zone Descriptor Change Notices: Not Supported 00:07:32.927 Discovery Log Change Notices: Not Supported 00:07:32.927 Controller Attributes 00:07:32.927 128-bit Host Identifier: Not Supported 00:07:32.927 Non-Operational Permissive Mode: Not Supported 00:07:32.927 NVM Sets: Not Supported 00:07:32.928 Read Recovery Levels: Not Supported 00:07:32.928 Endurance Groups: Not Supported 00:07:32.928 Predictable Latency Mode: Not Supported 00:07:32.928 Traffic Based Keep ALive: Not Supported 00:07:32.928 Namespace Granularity: Not Supported 00:07:32.928 SQ Associations: Not Supported 00:07:32.928 UUID List: Not Supported 00:07:32.928 Multi-Domain Subsystem: Not Supported 00:07:32.928 Fixed Capacity Management: Not Supported 00:07:32.928 Variable Capacity Management: Not Supported 00:07:32.928 Delete Endurance Group: Not Supported 00:07:32.928 Delete NVM Set: Not Supported 00:07:32.928 Extended LBA Formats Supported: Supported 00:07:32.928 Flexible Data Placement Supported: Not Supported 00:07:32.928 00:07:32.928 Controller Memory Buffer Support 00:07:32.928 ================================ 00:07:32.928 Supported: No 00:07:32.928 00:07:32.928 Persistent Memory Region Support 00:07:32.928 ================================ 00:07:32.928 Supported: No 00:07:32.928 00:07:32.928 Admin Command Set Attributes 00:07:32.928 ============================ 00:07:32.928 Security Send/Receive: Not Supported 00:07:32.928 Format NVM: Supported 00:07:32.928 Firmware Activate/Download: Not Supported 00:07:32.928 Namespace Management: Supported 00:07:32.928 Device Self-Test: Not Supported 00:07:32.928 Directives: Supported 00:07:32.928 NVMe-MI: Not Supported 00:07:32.928 Virtualization Management: Not Supported 00:07:32.928 Doorbell Buffer Config: Supported 00:07:32.928 Get LBA Status Capability: Not Supported 00:07:32.928 Command & Feature Lockdown Capability: Not Supported 00:07:32.928 Abort Command Limit: 4 00:07:32.928 Async Event Request Limit: 4 00:07:32.928 Number of Firmware Slots: N/A 00:07:32.928 Firmware Slot 1 Read-Only: N/A 00:07:32.928 Firmware Activation Without Reset: N/A 00:07:32.928 Multiple Update Detection Support: N/A 00:07:32.928 Firmware Update Granularity: No Information Provided 00:07:32.928 Per-Namespace SMART Log: Yes 00:07:32.928 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.928 Subsystem NQN: nqn.2019-08.org.qemu:12341 
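The nvme.sh@15 and nvme.sh@16 trace lines repeated through this section show the shape of the driving loop. A minimal reconstruction follows, with the bdfs array as an assumption inferred from the four controllers probed in this run (how the real script populates bdfs is not shown in this log):

# Sketch of the identify loop traced at nvme/nvme.sh@15 and @16.
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
for bdf in "${bdfs[@]}"; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:$bdf" -i 0
done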
00:07:32.928 Command Effects Log Page: Supported 00:07:32.928 Get Log Page Extended Data: Supported 00:07:32.928 Telemetry Log Pages: Not Supported 00:07:32.928 Persistent Event Log Pages: Not Supported 00:07:32.928 Supported Log Pages Log Page: May Support 00:07:32.928 Commands Supported & Effects Log Page: Not Supported 00:07:32.928 Feature Identifiers & Effects Log Page:May Support 00:07:32.928 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.928 Data Area 4 for Telemetry Log: Not Supported 00:07:32.928 Error Log Page Entries Supported: 1 00:07:32.928 Keep Alive: Not Supported 00:07:32.928 00:07:32.928 NVM Command Set Attributes 00:07:32.928 ========================== 00:07:32.928 Submission Queue Entry Size 00:07:32.928 Max: 64 00:07:32.928 Min: 64 00:07:32.928 Completion Queue Entry Size 00:07:32.928 Max: 16 00:07:32.928 Min: 16 00:07:32.928 Number of Namespaces: 256 00:07:32.928 Compare Command: Supported 00:07:32.928 Write Uncorrectable Command: Not Supported 00:07:32.928 Dataset Management Command: Supported 00:07:32.928 Write Zeroes Command: Supported 00:07:32.928 Set Features Save Field: Supported 00:07:32.928 Reservations: Not Supported 00:07:32.928 Timestamp: Supported 00:07:32.928 Copy: Supported 00:07:32.928 Volatile Write Cache: Present 00:07:32.928 Atomic Write Unit (Normal): 1 00:07:32.928 Atomic Write Unit (PFail): 1 00:07:32.928 Atomic Compare & Write Unit: 1 00:07:32.928 Fused Compare & Write: Not Supported 00:07:32.928 Scatter-Gather List 00:07:32.928 SGL Command Set: Supported 00:07:32.928 SGL Keyed: Not Supported 00:07:32.928 SGL Bit Bucket Descriptor: Not Supported 00:07:32.928 SGL Metadata Pointer: Not Supported 00:07:32.928 Oversized SGL: Not Supported 00:07:32.928 SGL Metadata Address: Not Supported 00:07:32.928 SGL Offset: Not Supported 00:07:32.928 Transport SGL Data Block: Not Supported 00:07:32.928 Replay Protected Memory Block: Not Supported 00:07:32.928 00:07:32.928 Firmware Slot Information 00:07:32.928 ========================= 00:07:32.928 Active slot: 1 00:07:32.928 Slot 1 Firmware Revision: 1.0 00:07:32.928 00:07:32.928 00:07:32.928 Commands Supported and Effects 00:07:32.928 ============================== 00:07:32.928 Admin Commands 00:07:32.928 -------------- 00:07:32.928 Delete I/O Submission Queue (00h): Supported 00:07:32.928 Create I/O Submission Queue (01h): Supported 00:07:32.928 Get Log Page (02h): Supported 00:07:32.928 Delete I/O Completion Queue (04h): Supported 00:07:32.928 Create I/O Completion Queue (05h): Supported 00:07:32.928 Identify (06h): Supported 00:07:32.928 Abort (08h): Supported 00:07:32.928 Set Features (09h): Supported 00:07:32.928 Get Features (0Ah): Supported 00:07:32.928 Asynchronous Event Request (0Ch): Supported 00:07:32.928 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.928 Directive Send (19h): Supported 00:07:32.928 Directive Receive (1Ah): Supported 00:07:32.928 Virtualization Management (1Ch): Supported 00:07:32.928 Doorbell Buffer Config (7Ch): Supported 00:07:32.928 Format NVM (80h): Supported LBA-Change 00:07:32.928 I/O Commands 00:07:32.928 ------------ 00:07:32.928 Flush (00h): Supported LBA-Change 00:07:32.928 Write (01h): Supported LBA-Change 00:07:32.928 Read (02h): Supported 00:07:32.928 Compare (05h): Supported 00:07:32.928 Write Zeroes (08h): Supported LBA-Change 00:07:32.928 Dataset Management (09h): Supported LBA-Change 00:07:32.928 Unknown (0Ch): Supported 00:07:32.928 Unknown (12h): Supported 00:07:32.928 Copy (19h): Supported LBA-Change 00:07:32.928 Unknown (1Dh): 
Supported LBA-Change 00:07:32.928 00:07:32.928 Error Log 00:07:32.928 ========= 00:07:32.928 00:07:32.928 Arbitration 00:07:32.928 =========== 00:07:32.928 Arbitration Burst: no limit 00:07:32.928 00:07:32.928 Power Management 00:07:32.928 ================ 00:07:32.928 Number of Power States: 1 00:07:32.928 Current Power State: Power State #0 00:07:32.928 Power State #0: 00:07:32.928 Max Power: 25.00 W 00:07:32.928 Non-Operational State: Operational 00:07:32.928 Entry Latency: 16 microseconds 00:07:32.928 Exit Latency: 4 microseconds 00:07:32.928 Relative Read Throughput: 0 00:07:32.928 Relative Read Latency: 0 00:07:32.928 Relative Write Throughput: 0 00:07:32.929 Relative Write Latency: 0 00:07:32.929 Idle Power: Not Reported 00:07:32.929 Active Power: Not Reported 00:07:32.929 Non-Operational Permissive Mode: Not Supported 00:07:32.929 00:07:32.929 Health Information 00:07:32.929 ================== 00:07:32.929 Critical Warnings: 00:07:32.929 Available Spare Space: OK 00:07:32.929 Temperature: OK 00:07:32.929 Device Reliability: OK 00:07:32.929 Read Only: No 00:07:32.929 Volatile Memory Backup: OK 00:07:32.929 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.929 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.929 Available Spare: 0% 00:07:32.929 Available Spare Threshold: 0% 00:07:32.929 Life Percentage Used: 0% 00:07:32.929 Data Units Read: 1028 00:07:32.929 Data Units Written: 893 00:07:32.929 Host Read Commands: 56287 00:07:32.929 Host Write Commands: 55048 00:07:32.929 Controller Busy Time: 0 minutes 00:07:32.929 Power Cycles: 0 00:07:32.929 Power On Hours: 0 hours 00:07:32.929 Unsafe Shutdowns: 0 00:07:32.929 Unrecoverable Media Errors: 0 00:07:32.929 Lifetime Error Log Entries: 0 00:07:32.929 Warning Temperature Time: 0 minutes 00:07:32.929 Critical Temperature Time: 0 minutes 00:07:32.929 00:07:32.929 Number of Queues 00:07:32.929 ================ 00:07:32.929 Number of I/O Submission Queues: 64 00:07:32.929 Number of I/O Completion Queues: 64 00:07:32.929 00:07:32.929 ZNS Specific Controller Data 00:07:32.929 ============================ 00:07:32.929 Zone Append Size Limit: 0 00:07:32.929 00:07:32.929 00:07:32.929 Active Namespaces 00:07:32.929 ================= 00:07:32.929 Namespace ID:1 00:07:32.929 Error Recovery Timeout: Unlimited 00:07:32.929 Command Set Identifier: NVM (00h) 00:07:32.929 Deallocate: Supported 00:07:32.929 Deallocated/Unwritten Error: Supported 00:07:32.929 Deallocated Read Value: All 0x00 00:07:32.929 Deallocate in Write Zeroes: Not Supported 00:07:32.929 Deallocated Guard Field: 0xFFFF 00:07:32.929 Flush: Supported 00:07:32.929 Reservation: Not Supported 00:07:32.929 Namespace Sharing Capabilities: Private 00:07:32.929 Size (in LBAs): 1310720 (5GiB) 00:07:32.929 Capacity (in LBAs): 1310720 (5GiB) 00:07:32.929 Utilization (in LBAs): 1310720 (5GiB) 00:07:32.929 Thin Provisioning: Not Supported 00:07:32.929 Per-NS Atomic Units: No 00:07:32.929 Maximum Single Source Range Length: 128 00:07:32.929 Maximum Copy Length: 128 00:07:32.929 Maximum Source Range Count: 128 00:07:32.929 NGUID/EUI64 Never Reused: No 00:07:32.929 Namespace Write Protected: No 00:07:32.929 Number of LBA Formats: 8 00:07:32.929 Current LBA Format: LBA Format #04 00:07:32.929 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.929 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.929 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.929 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.929 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:32.929 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.929 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.929 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.929 00:07:32.929 NVM Specific Namespace Data 00:07:32.929 =========================== 00:07:32.929 Logical Block Storage Tag Mask: 0 00:07:32.929 Protection Information Capabilities: 00:07:32.929 16b Guard Protection Information Storage Tag Support: No 00:07:32.929 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.929 Storage Tag Check Read Support: No 00:07:32.929 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.929 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.929 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.929 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.929 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.929 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.929 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.929 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.929 04:57:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:32.929 04:57:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:33.189 ===================================================== 00:07:33.189 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:33.189 ===================================================== 00:07:33.189 Controller Capabilities/Features 00:07:33.189 ================================ 00:07:33.189 Vendor ID: 1b36 00:07:33.189 Subsystem Vendor ID: 1af4 00:07:33.189 Serial Number: 12342 00:07:33.189 Model Number: QEMU NVMe Ctrl 00:07:33.189 Firmware Version: 8.0.0 00:07:33.189 Recommended Arb Burst: 6 00:07:33.189 IEEE OUI Identifier: 00 54 52 00:07:33.189 Multi-path I/O 00:07:33.189 May have multiple subsystem ports: No 00:07:33.189 May have multiple controllers: No 00:07:33.189 Associated with SR-IOV VF: No 00:07:33.189 Max Data Transfer Size: 524288 00:07:33.189 Max Number of Namespaces: 256 00:07:33.189 Max Number of I/O Queues: 64 00:07:33.189 NVMe Specification Version (VS): 1.4 00:07:33.189 NVMe Specification Version (Identify): 1.4 00:07:33.189 Maximum Queue Entries: 2048 00:07:33.189 Contiguous Queues Required: Yes 00:07:33.189 Arbitration Mechanisms Supported 00:07:33.189 Weighted Round Robin: Not Supported 00:07:33.189 Vendor Specific: Not Supported 00:07:33.189 Reset Timeout: 7500 ms 00:07:33.189 Doorbell Stride: 4 bytes 00:07:33.189 NVM Subsystem Reset: Not Supported 00:07:33.189 Command Sets Supported 00:07:33.189 NVM Command Set: Supported 00:07:33.189 Boot Partition: Not Supported 00:07:33.189 Memory Page Size Minimum: 4096 bytes 00:07:33.189 Memory Page Size Maximum: 65536 bytes 00:07:33.189 Persistent Memory Region: Not Supported 00:07:33.189 Optional Asynchronous Events Supported 00:07:33.189 Namespace Attribute Notices: Supported 00:07:33.189 Firmware Activation Notices: Not Supported 00:07:33.189 ANA Change Notices: Not Supported 00:07:33.189 PLE Aggregate Log Change Notices: Not Supported 00:07:33.189 LBA Status Info Alert Notices: 
Not Supported 00:07:33.189 EGE Aggregate Log Change Notices: Not Supported 00:07:33.189 Normal NVM Subsystem Shutdown event: Not Supported 00:07:33.189 Zone Descriptor Change Notices: Not Supported 00:07:33.189 Discovery Log Change Notices: Not Supported 00:07:33.189 Controller Attributes 00:07:33.189 128-bit Host Identifier: Not Supported 00:07:33.189 Non-Operational Permissive Mode: Not Supported 00:07:33.189 NVM Sets: Not Supported 00:07:33.189 Read Recovery Levels: Not Supported 00:07:33.189 Endurance Groups: Not Supported 00:07:33.189 Predictable Latency Mode: Not Supported 00:07:33.189 Traffic Based Keep ALive: Not Supported 00:07:33.189 Namespace Granularity: Not Supported 00:07:33.189 SQ Associations: Not Supported 00:07:33.189 UUID List: Not Supported 00:07:33.189 Multi-Domain Subsystem: Not Supported 00:07:33.189 Fixed Capacity Management: Not Supported 00:07:33.189 Variable Capacity Management: Not Supported 00:07:33.189 Delete Endurance Group: Not Supported 00:07:33.190 Delete NVM Set: Not Supported 00:07:33.190 Extended LBA Formats Supported: Supported 00:07:33.190 Flexible Data Placement Supported: Not Supported 00:07:33.190 00:07:33.190 Controller Memory Buffer Support 00:07:33.190 ================================ 00:07:33.190 Supported: No 00:07:33.190 00:07:33.190 Persistent Memory Region Support 00:07:33.190 ================================ 00:07:33.190 Supported: No 00:07:33.190 00:07:33.190 Admin Command Set Attributes 00:07:33.190 ============================ 00:07:33.190 Security Send/Receive: Not Supported 00:07:33.190 Format NVM: Supported 00:07:33.190 Firmware Activate/Download: Not Supported 00:07:33.190 Namespace Management: Supported 00:07:33.190 Device Self-Test: Not Supported 00:07:33.190 Directives: Supported 00:07:33.190 NVMe-MI: Not Supported 00:07:33.190 Virtualization Management: Not Supported 00:07:33.190 Doorbell Buffer Config: Supported 00:07:33.190 Get LBA Status Capability: Not Supported 00:07:33.190 Command & Feature Lockdown Capability: Not Supported 00:07:33.190 Abort Command Limit: 4 00:07:33.190 Async Event Request Limit: 4 00:07:33.190 Number of Firmware Slots: N/A 00:07:33.190 Firmware Slot 1 Read-Only: N/A 00:07:33.190 Firmware Activation Without Reset: N/A 00:07:33.190 Multiple Update Detection Support: N/A 00:07:33.190 Firmware Update Granularity: No Information Provided 00:07:33.190 Per-Namespace SMART Log: Yes 00:07:33.190 Asymmetric Namespace Access Log Page: Not Supported 00:07:33.190 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:33.190 Command Effects Log Page: Supported 00:07:33.190 Get Log Page Extended Data: Supported 00:07:33.190 Telemetry Log Pages: Not Supported 00:07:33.190 Persistent Event Log Pages: Not Supported 00:07:33.190 Supported Log Pages Log Page: May Support 00:07:33.190 Commands Supported & Effects Log Page: Not Supported 00:07:33.190 Feature Identifiers & Effects Log Page:May Support 00:07:33.190 NVMe-MI Commands & Effects Log Page: May Support 00:07:33.190 Data Area 4 for Telemetry Log: Not Supported 00:07:33.190 Error Log Page Entries Supported: 1 00:07:33.190 Keep Alive: Not Supported 00:07:33.190 00:07:33.190 NVM Command Set Attributes 00:07:33.190 ========================== 00:07:33.190 Submission Queue Entry Size 00:07:33.190 Max: 64 00:07:33.190 Min: 64 00:07:33.190 Completion Queue Entry Size 00:07:33.190 Max: 16 00:07:33.190 Min: 16 00:07:33.190 Number of Namespaces: 256 00:07:33.190 Compare Command: Supported 00:07:33.190 Write Uncorrectable Command: Not Supported 00:07:33.190 Dataset Management Command: 
Supported 00:07:33.190 Write Zeroes Command: Supported 00:07:33.190 Set Features Save Field: Supported 00:07:33.190 Reservations: Not Supported 00:07:33.190 Timestamp: Supported 00:07:33.190 Copy: Supported 00:07:33.190 Volatile Write Cache: Present 00:07:33.190 Atomic Write Unit (Normal): 1 00:07:33.190 Atomic Write Unit (PFail): 1 00:07:33.190 Atomic Compare & Write Unit: 1 00:07:33.190 Fused Compare & Write: Not Supported 00:07:33.190 Scatter-Gather List 00:07:33.190 SGL Command Set: Supported 00:07:33.190 SGL Keyed: Not Supported 00:07:33.190 SGL Bit Bucket Descriptor: Not Supported 00:07:33.190 SGL Metadata Pointer: Not Supported 00:07:33.190 Oversized SGL: Not Supported 00:07:33.190 SGL Metadata Address: Not Supported 00:07:33.190 SGL Offset: Not Supported 00:07:33.190 Transport SGL Data Block: Not Supported 00:07:33.190 Replay Protected Memory Block: Not Supported 00:07:33.190 00:07:33.190 Firmware Slot Information 00:07:33.190 ========================= 00:07:33.190 Active slot: 1 00:07:33.190 Slot 1 Firmware Revision: 1.0 00:07:33.190 00:07:33.190 00:07:33.190 Commands Supported and Effects 00:07:33.190 ============================== 00:07:33.190 Admin Commands 00:07:33.190 -------------- 00:07:33.190 Delete I/O Submission Queue (00h): Supported 00:07:33.190 Create I/O Submission Queue (01h): Supported 00:07:33.190 Get Log Page (02h): Supported 00:07:33.190 Delete I/O Completion Queue (04h): Supported 00:07:33.190 Create I/O Completion Queue (05h): Supported 00:07:33.190 Identify (06h): Supported 00:07:33.190 Abort (08h): Supported 00:07:33.190 Set Features (09h): Supported 00:07:33.190 Get Features (0Ah): Supported 00:07:33.190 Asynchronous Event Request (0Ch): Supported 00:07:33.190 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:33.190 Directive Send (19h): Supported 00:07:33.190 Directive Receive (1Ah): Supported 00:07:33.190 Virtualization Management (1Ch): Supported 00:07:33.190 Doorbell Buffer Config (7Ch): Supported 00:07:33.190 Format NVM (80h): Supported LBA-Change 00:07:33.190 I/O Commands 00:07:33.190 ------------ 00:07:33.190 Flush (00h): Supported LBA-Change 00:07:33.190 Write (01h): Supported LBA-Change 00:07:33.190 Read (02h): Supported 00:07:33.190 Compare (05h): Supported 00:07:33.190 Write Zeroes (08h): Supported LBA-Change 00:07:33.190 Dataset Management (09h): Supported LBA-Change 00:07:33.190 Unknown (0Ch): Supported 00:07:33.190 Unknown (12h): Supported 00:07:33.190 Copy (19h): Supported LBA-Change 00:07:33.190 Unknown (1Dh): Supported LBA-Change 00:07:33.190 00:07:33.190 Error Log 00:07:33.190 ========= 00:07:33.190 00:07:33.190 Arbitration 00:07:33.190 =========== 00:07:33.190 Arbitration Burst: no limit 00:07:33.190 00:07:33.190 Power Management 00:07:33.190 ================ 00:07:33.190 Number of Power States: 1 00:07:33.190 Current Power State: Power State #0 00:07:33.190 Power State #0: 00:07:33.190 Max Power: 25.00 W 00:07:33.190 Non-Operational State: Operational 00:07:33.190 Entry Latency: 16 microseconds 00:07:33.190 Exit Latency: 4 microseconds 00:07:33.190 Relative Read Throughput: 0 00:07:33.190 Relative Read Latency: 0 00:07:33.190 Relative Write Throughput: 0 00:07:33.190 Relative Write Latency: 0 00:07:33.190 Idle Power: Not Reported 00:07:33.190 Active Power: Not Reported 00:07:33.190 Non-Operational Permissive Mode: Not Supported 00:07:33.190 00:07:33.190 Health Information 00:07:33.191 ================== 00:07:33.191 Critical Warnings: 00:07:33.191 Available Spare Space: OK 00:07:33.191 Temperature: OK 00:07:33.191 Device 
Reliability: OK 00:07:33.191 Read Only: No 00:07:33.191 Volatile Memory Backup: OK 00:07:33.191 Current Temperature: 323 Kelvin (50 Celsius) 00:07:33.191 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:33.191 Available Spare: 0% 00:07:33.191 Available Spare Threshold: 0% 00:07:33.191 Life Percentage Used: 0% 00:07:33.191 Data Units Read: 2212 00:07:33.191 Data Units Written: 2000 00:07:33.191 Host Read Commands: 120425 00:07:33.191 Host Write Commands: 118694 00:07:33.191 Controller Busy Time: 0 minutes 00:07:33.191 Power Cycles: 0 00:07:33.191 Power On Hours: 0 hours 00:07:33.191 Unsafe Shutdowns: 0 00:07:33.191 Unrecoverable Media Errors: 0 00:07:33.191 Lifetime Error Log Entries: 0 00:07:33.191 Warning Temperature Time: 0 minutes 00:07:33.191 Critical Temperature Time: 0 minutes 00:07:33.191 00:07:33.191 Number of Queues 00:07:33.191 ================ 00:07:33.191 Number of I/O Submission Queues: 64 00:07:33.191 Number of I/O Completion Queues: 64 00:07:33.191 00:07:33.191 ZNS Specific Controller Data 00:07:33.191 ============================ 00:07:33.191 Zone Append Size Limit: 0 00:07:33.191 00:07:33.191 00:07:33.191 Active Namespaces 00:07:33.191 ================= 00:07:33.191 Namespace ID:1 00:07:33.191 Error Recovery Timeout: Unlimited 00:07:33.191 Command Set Identifier: NVM (00h) 00:07:33.191 Deallocate: Supported 00:07:33.191 Deallocated/Unwritten Error: Supported 00:07:33.191 Deallocated Read Value: All 0x00 00:07:33.191 Deallocate in Write Zeroes: Not Supported 00:07:33.191 Deallocated Guard Field: 0xFFFF 00:07:33.191 Flush: Supported 00:07:33.191 Reservation: Not Supported 00:07:33.191 Namespace Sharing Capabilities: Private 00:07:33.191 Size (in LBAs): 1048576 (4GiB) 00:07:33.191 Capacity (in LBAs): 1048576 (4GiB) 00:07:33.191 Utilization (in LBAs): 1048576 (4GiB) 00:07:33.191 Thin Provisioning: Not Supported 00:07:33.191 Per-NS Atomic Units: No 00:07:33.191 Maximum Single Source Range Length: 128 00:07:33.191 Maximum Copy Length: 128 00:07:33.191 Maximum Source Range Count: 128 00:07:33.191 NGUID/EUI64 Never Reused: No 00:07:33.191 Namespace Write Protected: No 00:07:33.191 Number of LBA Formats: 8 00:07:33.191 Current LBA Format: LBA Format #04 00:07:33.191 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:33.191 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:33.191 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:33.191 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:33.191 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:33.191 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:33.191 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:33.191 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:33.191 00:07:33.191 NVM Specific Namespace Data 00:07:33.191 =========================== 00:07:33.191 Logical Block Storage Tag Mask: 0 00:07:33.191 Protection Information Capabilities: 00:07:33.191 16b Guard Protection Information Storage Tag Support: No 00:07:33.191 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:33.191 Storage Tag Check Read Support: No 00:07:33.191 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.191 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.191 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.191 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.191 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.191 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.191 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.191 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.191 Namespace ID:2 00:07:33.191 Error Recovery Timeout: Unlimited 00:07:33.191 Command Set Identifier: NVM (00h) 00:07:33.191 Deallocate: Supported 00:07:33.191 Deallocated/Unwritten Error: Supported 00:07:33.191 Deallocated Read Value: All 0x00 00:07:33.191 Deallocate in Write Zeroes: Not Supported 00:07:33.191 Deallocated Guard Field: 0xFFFF 00:07:33.191 Flush: Supported 00:07:33.191 Reservation: Not Supported 00:07:33.191 Namespace Sharing Capabilities: Private 00:07:33.191 Size (in LBAs): 1048576 (4GiB) 00:07:33.191 Capacity (in LBAs): 1048576 (4GiB) 00:07:33.191 Utilization (in LBAs): 1048576 (4GiB) 00:07:33.191 Thin Provisioning: Not Supported 00:07:33.191 Per-NS Atomic Units: No 00:07:33.191 Maximum Single Source Range Length: 128 00:07:33.191 Maximum Copy Length: 128 00:07:33.191 Maximum Source Range Count: 128 00:07:33.191 NGUID/EUI64 Never Reused: No 00:07:33.191 Namespace Write Protected: No 00:07:33.191 Number of LBA Formats: 8 00:07:33.191 Current LBA Format: LBA Format #04 00:07:33.191 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:33.191 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:33.191 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:33.191 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:33.191 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:33.191 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:33.191 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:33.191 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:33.191 00:07:33.191 NVM Specific Namespace Data 00:07:33.191 =========================== 00:07:33.191 Logical Block Storage Tag Mask: 0 00:07:33.191 Protection Information Capabilities: 00:07:33.191 16b Guard Protection Information Storage Tag Support: No 00:07:33.191 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:33.191 Storage Tag Check Read Support: No 00:07:33.191 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.191 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.191 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.191 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.191 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.191 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.191 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.191 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.191 Namespace ID:3 00:07:33.191 Error Recovery Timeout: Unlimited 00:07:33.191 Command Set Identifier: NVM (00h) 00:07:33.191 Deallocate: Supported 00:07:33.191 Deallocated/Unwritten Error: Supported 00:07:33.191 Deallocated Read Value: All 0x00 00:07:33.191 Deallocate in Write Zeroes: Not Supported 00:07:33.191 Deallocated Guard Field: 0xFFFF 00:07:33.191 Flush: Supported 00:07:33.191 Reservation: Not Supported 00:07:33.191 
Namespace Sharing Capabilities: Private 00:07:33.191 Size (in LBAs): 1048576 (4GiB) 00:07:33.191 Capacity (in LBAs): 1048576 (4GiB) 00:07:33.191 Utilization (in LBAs): 1048576 (4GiB) 00:07:33.192 Thin Provisioning: Not Supported 00:07:33.192 Per-NS Atomic Units: No 00:07:33.192 Maximum Single Source Range Length: 128 00:07:33.192 Maximum Copy Length: 128 00:07:33.192 Maximum Source Range Count: 128 00:07:33.192 NGUID/EUI64 Never Reused: No 00:07:33.192 Namespace Write Protected: No 00:07:33.192 Number of LBA Formats: 8 00:07:33.192 Current LBA Format: LBA Format #04 00:07:33.192 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:33.192 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:33.192 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:33.192 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:33.192 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:33.192 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:33.192 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:33.192 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:33.192 00:07:33.192 NVM Specific Namespace Data 00:07:33.192 =========================== 00:07:33.192 Logical Block Storage Tag Mask: 0 00:07:33.192 Protection Information Capabilities: 00:07:33.192 16b Guard Protection Information Storage Tag Support: No 00:07:33.192 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:33.192 Storage Tag Check Read Support: No 00:07:33.192 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.192 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.192 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.192 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.192 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.192 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.192 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.192 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.192 04:57:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:33.192 04:57:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:33.451 ===================================================== 00:07:33.451 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:33.451 ===================================================== 00:07:33.451 Controller Capabilities/Features 00:07:33.451 ================================ 00:07:33.451 Vendor ID: 1b36 00:07:33.451 Subsystem Vendor ID: 1af4 00:07:33.451 Serial Number: 12343 00:07:33.451 Model Number: QEMU NVMe Ctrl 00:07:33.451 Firmware Version: 8.0.0 00:07:33.451 Recommended Arb Burst: 6 00:07:33.451 IEEE OUI Identifier: 00 54 52 00:07:33.452 Multi-path I/O 00:07:33.452 May have multiple subsystem ports: No 00:07:33.452 May have multiple controllers: Yes 00:07:33.452 Associated with SR-IOV VF: No 00:07:33.452 Max Data Transfer Size: 524288 00:07:33.452 Max Number of Namespaces: 256 00:07:33.452 Max Number of I/O Queues: 64 00:07:33.452 NVMe Specification Version (VS): 1.4 00:07:33.452 NVMe Specification Version (Identify): 1.4 00:07:33.452 Maximum Queue Entries: 2048 
00:07:33.452 Contiguous Queues Required: Yes 00:07:33.452 Arbitration Mechanisms Supported 00:07:33.452 Weighted Round Robin: Not Supported 00:07:33.452 Vendor Specific: Not Supported 00:07:33.452 Reset Timeout: 7500 ms 00:07:33.452 Doorbell Stride: 4 bytes 00:07:33.452 NVM Subsystem Reset: Not Supported 00:07:33.452 Command Sets Supported 00:07:33.452 NVM Command Set: Supported 00:07:33.452 Boot Partition: Not Supported 00:07:33.452 Memory Page Size Minimum: 4096 bytes 00:07:33.452 Memory Page Size Maximum: 65536 bytes 00:07:33.452 Persistent Memory Region: Not Supported 00:07:33.452 Optional Asynchronous Events Supported 00:07:33.452 Namespace Attribute Notices: Supported 00:07:33.452 Firmware Activation Notices: Not Supported 00:07:33.452 ANA Change Notices: Not Supported 00:07:33.452 PLE Aggregate Log Change Notices: Not Supported 00:07:33.452 LBA Status Info Alert Notices: Not Supported 00:07:33.452 EGE Aggregate Log Change Notices: Not Supported 00:07:33.452 Normal NVM Subsystem Shutdown event: Not Supported 00:07:33.452 Zone Descriptor Change Notices: Not Supported 00:07:33.452 Discovery Log Change Notices: Not Supported 00:07:33.452 Controller Attributes 00:07:33.452 128-bit Host Identifier: Not Supported 00:07:33.452 Non-Operational Permissive Mode: Not Supported 00:07:33.452 NVM Sets: Not Supported 00:07:33.452 Read Recovery Levels: Not Supported 00:07:33.452 Endurance Groups: Supported 00:07:33.452 Predictable Latency Mode: Not Supported 00:07:33.452 Traffic Based Keep Alive: Not Supported 00:07:33.452 Namespace Granularity: Not Supported 00:07:33.452 SQ Associations: Not Supported 00:07:33.452 UUID List: Not Supported 00:07:33.452 Multi-Domain Subsystem: Not Supported 00:07:33.452 Fixed Capacity Management: Not Supported 00:07:33.452 Variable Capacity Management: Not Supported 00:07:33.452 Delete Endurance Group: Not Supported 00:07:33.452 Delete NVM Set: Not Supported 00:07:33.452 Extended LBA Formats Supported: Supported 00:07:33.452 Flexible Data Placement Supported: Supported 00:07:33.452 00:07:33.452 Controller Memory Buffer Support 00:07:33.452 ================================ 00:07:33.452 Supported: No 00:07:33.452 00:07:33.452 Persistent Memory Region Support 00:07:33.452 ================================ 00:07:33.452 Supported: No 00:07:33.452 00:07:33.452 Admin Command Set Attributes 00:07:33.452 ============================ 00:07:33.452 Security Send/Receive: Not Supported 00:07:33.452 Format NVM: Supported 00:07:33.452 Firmware Activate/Download: Not Supported 00:07:33.452 Namespace Management: Supported 00:07:33.452 Device Self-Test: Not Supported 00:07:33.452 Directives: Supported 00:07:33.452 NVMe-MI: Not Supported 00:07:33.452 Virtualization Management: Not Supported 00:07:33.452 Doorbell Buffer Config: Supported 00:07:33.452 Get LBA Status Capability: Not Supported 00:07:33.452 Command & Feature Lockdown Capability: Not Supported 00:07:33.452 Abort Command Limit: 4 00:07:33.452 Async Event Request Limit: 4 00:07:33.452 Number of Firmware Slots: N/A 00:07:33.452 Firmware Slot 1 Read-Only: N/A 00:07:33.452 Firmware Activation Without Reset: N/A 00:07:33.452 Multiple Update Detection Support: N/A 00:07:33.452 Firmware Update Granularity: No Information Provided 00:07:33.452 Per-Namespace SMART Log: Yes 00:07:33.452 Asymmetric Namespace Access Log Page: Not Supported 00:07:33.452 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:33.452 Command Effects Log Page: Supported 00:07:33.452 Get Log Page Extended Data: Supported 00:07:33.452 Telemetry Log Pages: Not 
Supported 00:07:33.452 Persistent Event Log Pages: Not Supported 00:07:33.452 Supported Log Pages Log Page: May Support 00:07:33.452 Commands Supported & Effects Log Page: Not Supported 00:07:33.452 Feature Identifiers & Effects Log Page: May Support 00:07:33.452 NVMe-MI Commands & Effects Log Page: May Support 00:07:33.452 Data Area 4 for Telemetry Log: Not Supported 00:07:33.452 Error Log Page Entries Supported: 1 00:07:33.452 Keep Alive: Not Supported 00:07:33.452 00:07:33.452 NVM Command Set Attributes 00:07:33.452 ========================== 00:07:33.452 Submission Queue Entry Size 00:07:33.452 Max: 64 00:07:33.452 Min: 64 00:07:33.452 Completion Queue Entry Size 00:07:33.452 Max: 16 00:07:33.452 Min: 16 00:07:33.452 Number of Namespaces: 256 00:07:33.452 Compare Command: Supported 00:07:33.452 Write Uncorrectable Command: Not Supported 00:07:33.452 Dataset Management Command: Supported 00:07:33.452 Write Zeroes Command: Supported 00:07:33.452 Set Features Save Field: Supported 00:07:33.452 Reservations: Not Supported 00:07:33.452 Timestamp: Supported 00:07:33.452 Copy: Supported 00:07:33.452 Volatile Write Cache: Present 00:07:33.452 Atomic Write Unit (Normal): 1 00:07:33.452 Atomic Write Unit (PFail): 1 00:07:33.452 Atomic Compare & Write Unit: 1 00:07:33.452 Fused Compare & Write: Not Supported 00:07:33.452 Scatter-Gather List 00:07:33.452 SGL Command Set: Supported 00:07:33.452 SGL Keyed: Not Supported 00:07:33.452 SGL Bit Bucket Descriptor: Not Supported 00:07:33.452 SGL Metadata Pointer: Not Supported 00:07:33.452 Oversized SGL: Not Supported 00:07:33.452 SGL Metadata Address: Not Supported 00:07:33.452 SGL Offset: Not Supported 00:07:33.452 Transport SGL Data Block: Not Supported 00:07:33.452 Replay Protected Memory Block: Not Supported 00:07:33.452 00:07:33.452 Firmware Slot Information 00:07:33.452 ========================= 00:07:33.452 Active slot: 1 00:07:33.452 Slot 1 Firmware Revision: 1.0 00:07:33.452 00:07:33.452 00:07:33.452 Commands Supported and Effects 00:07:33.452 ============================== 00:07:33.452 Admin Commands 00:07:33.452 -------------- 00:07:33.452 Delete I/O Submission Queue (00h): Supported 00:07:33.452 Create I/O Submission Queue (01h): Supported 00:07:33.452 Get Log Page (02h): Supported 00:07:33.452 Delete I/O Completion Queue (04h): Supported 00:07:33.452 Create I/O Completion Queue (05h): Supported 00:07:33.452 Identify (06h): Supported 00:07:33.452 Abort (08h): Supported 00:07:33.452 Set Features (09h): Supported 00:07:33.452 Get Features (0Ah): Supported 00:07:33.453 Asynchronous Event Request (0Ch): Supported 00:07:33.453 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:33.453 Directive Send (19h): Supported 00:07:33.453 Directive Receive (1Ah): Supported 00:07:33.453 Virtualization Management (1Ch): Supported 00:07:33.453 Doorbell Buffer Config (7Ch): Supported 00:07:33.453 Format NVM (80h): Supported LBA-Change 00:07:33.453 I/O Commands 00:07:33.453 ------------ 00:07:33.453 Flush (00h): Supported LBA-Change 00:07:33.453 Write (01h): Supported LBA-Change 00:07:33.453 Read (02h): Supported 00:07:33.453 Compare (05h): Supported 00:07:33.453 Write Zeroes (08h): Supported LBA-Change 00:07:33.453 Dataset Management (09h): Supported LBA-Change 00:07:33.453 Unknown (0Ch): Supported 00:07:33.453 Unknown (12h): Supported 00:07:33.453 Copy (19h): Supported LBA-Change 00:07:33.453 Unknown (1Dh): Supported LBA-Change 00:07:33.453 00:07:33.453 Error Log 00:07:33.453 ========= 00:07:33.453 00:07:33.453 Arbitration 00:07:33.453 =========== 
00:07:33.453 Arbitration Burst: no limit 00:07:33.453 00:07:33.453 Power Management 00:07:33.453 ================ 00:07:33.453 Number of Power States: 1 00:07:33.453 Current Power State: Power State #0 00:07:33.453 Power State #0: 00:07:33.453 Max Power: 25.00 W 00:07:33.453 Non-Operational State: Operational 00:07:33.453 Entry Latency: 16 microseconds 00:07:33.453 Exit Latency: 4 microseconds 00:07:33.453 Relative Read Throughput: 0 00:07:33.453 Relative Read Latency: 0 00:07:33.453 Relative Write Throughput: 0 00:07:33.453 Relative Write Latency: 0 00:07:33.453 Idle Power: Not Reported 00:07:33.453 Active Power: Not Reported 00:07:33.453 Non-Operational Permissive Mode: Not Supported 00:07:33.453 00:07:33.453 Health Information 00:07:33.453 ================== 00:07:33.453 Critical Warnings: 00:07:33.453 Available Spare Space: OK 00:07:33.453 Temperature: OK 00:07:33.453 Device Reliability: OK 00:07:33.453 Read Only: No 00:07:33.453 Volatile Memory Backup: OK 00:07:33.453 Current Temperature: 323 Kelvin (50 Celsius) 00:07:33.453 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:33.453 Available Spare: 0% 00:07:33.453 Available Spare Threshold: 0% 00:07:33.453 Life Percentage Used: 0% 00:07:33.453 Data Units Read: 841 00:07:33.453 Data Units Written: 770 00:07:33.453 Host Read Commands: 40964 00:07:33.453 Host Write Commands: 40387 00:07:33.453 Controller Busy Time: 0 minutes 00:07:33.453 Power Cycles: 0 00:07:33.453 Power On Hours: 0 hours 00:07:33.453 Unsafe Shutdowns: 0 00:07:33.453 Unrecoverable Media Errors: 0 00:07:33.453 Lifetime Error Log Entries: 0 00:07:33.453 Warning Temperature Time: 0 minutes 00:07:33.453 Critical Temperature Time: 0 minutes 00:07:33.453 00:07:33.453 Number of Queues 00:07:33.453 ================ 00:07:33.453 Number of I/O Submission Queues: 64 00:07:33.453 Number of I/O Completion Queues: 64 00:07:33.453 00:07:33.453 ZNS Specific Controller Data 00:07:33.453 ============================ 00:07:33.453 Zone Append Size Limit: 0 00:07:33.453 00:07:33.453 00:07:33.453 Active Namespaces 00:07:33.453 ================= 00:07:33.453 Namespace ID:1 00:07:33.453 Error Recovery Timeout: Unlimited 00:07:33.453 Command Set Identifier: NVM (00h) 00:07:33.453 Deallocate: Supported 00:07:33.453 Deallocated/Unwritten Error: Supported 00:07:33.453 Deallocated Read Value: All 0x00 00:07:33.453 Deallocate in Write Zeroes: Not Supported 00:07:33.453 Deallocated Guard Field: 0xFFFF 00:07:33.453 Flush: Supported 00:07:33.453 Reservation: Not Supported 00:07:33.453 Namespace Sharing Capabilities: Multiple Controllers 00:07:33.453 Size (in LBAs): 262144 (1GiB) 00:07:33.453 Capacity (in LBAs): 262144 (1GiB) 00:07:33.453 Utilization (in LBAs): 262144 (1GiB) 00:07:33.453 Thin Provisioning: Not Supported 00:07:33.453 Per-NS Atomic Units: No 00:07:33.453 Maximum Single Source Range Length: 128 00:07:33.453 Maximum Copy Length: 128 00:07:33.453 Maximum Source Range Count: 128 00:07:33.453 NGUID/EUI64 Never Reused: No 00:07:33.453 Namespace Write Protected: No 00:07:33.453 Endurance group ID: 1 00:07:33.453 Number of LBA Formats: 8 00:07:33.453 Current LBA Format: LBA Format #04 00:07:33.453 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:33.453 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:33.453 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:33.453 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:33.453 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:33.453 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:33.453 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:33.453 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:33.453 00:07:33.453 Get Feature FDP: 00:07:33.453 ================ 00:07:33.453 Enabled: Yes 00:07:33.453 FDP configuration index: 0 00:07:33.453 00:07:33.453 FDP configurations log page 00:07:33.453 =========================== 00:07:33.453 Number of FDP configurations: 1 00:07:33.453 Version: 0 00:07:33.453 Size: 112 00:07:33.453 FDP Configuration Descriptor: 0 00:07:33.453 Descriptor Size: 96 00:07:33.453 Reclaim Group Identifier format: 2 00:07:33.453 FDP Volatile Write Cache: Not Present 00:07:33.453 FDP Configuration: Valid 00:07:33.453 Vendor Specific Size: 0 00:07:33.453 Number of Reclaim Groups: 2 00:07:33.453 Number of Reclaim Unit Handles: 8 00:07:33.453 Max Placement Identifiers: 128 00:07:33.453 Number of Namespaces Supported: 256 00:07:33.453 Reclaim Unit Nominal Size: 6000000 bytes 00:07:33.453 Estimated Reclaim Unit Time Limit: Not Reported 00:07:33.453 RUH Desc #000: RUH Type: Initially Isolated 00:07:33.453 RUH Desc #001: RUH Type: Initially Isolated 00:07:33.453 RUH Desc #002: RUH Type: Initially Isolated 00:07:33.453 RUH Desc #003: RUH Type: Initially Isolated 00:07:33.453 RUH Desc #004: RUH Type: Initially Isolated 00:07:33.453 RUH Desc #005: RUH Type: Initially Isolated 00:07:33.453 RUH Desc #006: RUH Type: Initially Isolated 00:07:33.453 RUH Desc #007: RUH Type: Initially Isolated 00:07:33.453 00:07:33.453 FDP reclaim unit handle usage log page 00:07:33.453 ====================================== 00:07:33.453 Number of Reclaim Unit Handles: 8 00:07:33.453 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:33.454 RUH Usage Desc #001: RUH Attributes: Unused 00:07:33.454 RUH Usage Desc #002: RUH Attributes: Unused 00:07:33.454 RUH Usage Desc #003: RUH Attributes: Unused 00:07:33.454 RUH Usage Desc #004: RUH Attributes: Unused 00:07:33.454 RUH Usage Desc #005: RUH Attributes: Unused 00:07:33.454 RUH Usage Desc #006: RUH Attributes: Unused 00:07:33.454 RUH Usage Desc #007: RUH Attributes: Unused 00:07:33.454 00:07:33.454 FDP statistics log page 00:07:33.454 ======================= 00:07:33.454 Host bytes with metadata written: 483958784 00:07:33.454 Media bytes with metadata written: 484012032 00:07:33.454 Media bytes erased: 0 00:07:33.454 00:07:33.454 FDP events log page 00:07:33.454 =================== 00:07:33.454 Number of FDP events: 0 00:07:33.454 00:07:33.454 NVM Specific Namespace Data 00:07:33.454 =========================== 00:07:33.454 Logical Block Storage Tag Mask: 0 00:07:33.454 Protection Information Capabilities: 00:07:33.454 16b Guard Protection Information Storage Tag Support: No 00:07:33.454 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:33.454 Storage Tag Check Read Support: No 00:07:33.454 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.454 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.454 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.454 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.454 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.454 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.454 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.454 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.454 00:07:33.454 real 0m0.944s 00:07:33.454 user 0m0.350s 00:07:33.454 sys 0m0.421s 00:07:33.454 04:57:02 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.454 04:57:02 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:33.454 ************************************ 00:07:33.454 END TEST nvme_identify 00:07:33.454 ************************************ 00:07:33.454 04:57:02 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:33.454 04:57:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.454 04:57:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.454 04:57:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.454 ************************************ 00:07:33.454 START TEST nvme_perf 00:07:33.454 ************************************ 00:07:33.454 04:57:02 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:33.454 04:57:02 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:34.829 Initializing NVMe Controllers 00:07:34.829 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:34.829 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:34.829 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:34.829 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:34.829 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:34.829 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:34.829 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:34.829 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:34.829 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:34.829 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:34.829 Initialization complete. Launching workers. 
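For anyone replaying this step by hand, a minimal shell sketch of the two invocations driving the output around this point: the binary paths, PCIe transport ID, and flags are copied verbatim from the logged command lines, while the variable names and the single-controller loop body are illustrative assumptions only.

    # Build-tree locations as they appear in this log (assumed checkout layout).
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    # One BDF from the bdfs loop above; the test iterates over every attached controller.
    TRID='trtype:PCIe traddr:0000:00:13.0'
    # Identify pass (nvme.sh@16): dump controller, namespace, and FDP data for one controller.
    "$SPDK_BIN/spdk_nvme_identify" -r "$TRID" -i 0
    # Perf pass (nvme.sh@22): 1-second sequential-read load with 12288-byte I/Os at queue
    # depth 128; -LL is the latency-tracking option that produces the summary latency data
    # and per-device latency histograms printed below.
    "$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

Note that the perf pass is run without -r, matching the log: it attaches to all four QEMU NVMe controllers (0000:00:10.0 through 0000:00:13.0) and reports each namespace separately.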
00:07:34.829 ======================================================== 00:07:34.829 Latency(us) 00:07:34.829 Device Information : IOPS MiB/s Average min max 00:07:34.829 PCIE (0000:00:13.0) NSID 1 from core 0: 17373.95 203.60 7370.18 5696.98 24753.60 00:07:34.829 PCIE (0000:00:10.0) NSID 1 from core 0: 17373.95 203.60 7363.96 5588.55 24871.63 00:07:34.829 PCIE (0000:00:11.0) NSID 1 from core 0: 17373.95 203.60 7358.68 5184.40 24920.95 00:07:34.829 PCIE (0000:00:12.0) NSID 1 from core 0: 17373.95 203.60 7352.25 4400.53 25690.52 00:07:34.829 PCIE (0000:00:12.0) NSID 2 from core 0: 17373.95 203.60 7345.99 3834.92 25523.24 00:07:34.829 PCIE (0000:00:12.0) NSID 3 from core 0: 17373.95 203.60 7339.80 3329.26 25409.71 00:07:34.829 ======================================================== 00:07:34.829 Total : 104243.68 1221.61 7355.14 3329.26 25690.52 00:07:34.829 00:07:34.829 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:34.829 ================================================================================= 00:07:34.829 1.00000% : 5847.828us 00:07:34.829 10.00000% : 6099.889us 00:07:34.829 25.00000% : 6402.363us 00:07:34.829 50.00000% : 6755.249us 00:07:34.829 75.00000% : 7410.609us 00:07:34.829 90.00000% : 9477.514us 00:07:34.829 95.00000% : 11443.594us 00:07:34.829 98.00000% : 13812.972us 00:07:34.829 99.00000% : 16031.114us 00:07:34.829 99.50000% : 16938.535us 00:07:34.829 99.90000% : 24500.382us 00:07:34.829 99.99000% : 24802.855us 00:07:34.829 99.99900% : 24802.855us 00:07:34.829 99.99990% : 24802.855us 00:07:34.829 99.99999% : 24802.855us 00:07:34.830 00:07:34.830 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:34.830 ================================================================================= 00:07:34.830 1.00000% : 5772.209us 00:07:34.830 10.00000% : 6049.477us 00:07:34.830 25.00000% : 6377.157us 00:07:34.830 50.00000% : 6805.662us 00:07:34.830 75.00000% : 7461.022us 00:07:34.830 90.00000% : 9427.102us 00:07:34.830 95.00000% : 11241.945us 00:07:34.830 98.00000% : 13712.148us 00:07:34.830 99.00000% : 16031.114us 00:07:34.830 99.50000% : 17039.360us 00:07:34.830 99.90000% : 24601.206us 00:07:34.830 99.99000% : 24903.680us 00:07:34.830 99.99900% : 24903.680us 00:07:34.830 99.99990% : 24903.680us 00:07:34.830 99.99999% : 24903.680us 00:07:34.830 00:07:34.830 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:34.830 ================================================================================= 00:07:34.830 1.00000% : 5847.828us 00:07:34.830 10.00000% : 6099.889us 00:07:34.830 25.00000% : 6402.363us 00:07:34.830 50.00000% : 6755.249us 00:07:34.830 75.00000% : 7410.609us 00:07:34.830 90.00000% : 9326.277us 00:07:34.830 95.00000% : 11191.532us 00:07:34.830 98.00000% : 13913.797us 00:07:34.830 99.00000% : 16031.114us 00:07:34.830 99.50000% : 16636.062us 00:07:34.830 99.90000% : 24702.031us 00:07:34.830 99.99000% : 24903.680us 00:07:34.830 99.99900% : 25004.505us 00:07:34.830 99.99990% : 25004.505us 00:07:34.830 99.99999% : 25004.505us 00:07:34.830 00:07:34.830 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:34.830 ================================================================================= 00:07:34.830 1.00000% : 5847.828us 00:07:34.830 10.00000% : 6099.889us 00:07:34.830 25.00000% : 6402.363us 00:07:34.830 50.00000% : 6755.249us 00:07:34.830 75.00000% : 7410.609us 00:07:34.830 90.00000% : 9225.452us 00:07:34.830 95.00000% : 11191.532us 00:07:34.830 98.00000% : 13712.148us 00:07:34.830 99.00000% 
: 16333.588us 00:07:34.830 99.50000% : 16938.535us 00:07:34.830 99.90000% : 25407.803us 00:07:34.830 99.99000% : 25710.277us 00:07:34.830 99.99900% : 25710.277us 00:07:34.830 99.99990% : 25710.277us 00:07:34.830 99.99999% : 25710.277us 00:07:34.830 00:07:34.830 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:34.830 ================================================================================= 00:07:34.830 1.00000% : 5847.828us 00:07:34.830 10.00000% : 6099.889us 00:07:34.830 25.00000% : 6402.363us 00:07:34.830 50.00000% : 6755.249us 00:07:34.830 75.00000% : 7410.609us 00:07:34.830 90.00000% : 9225.452us 00:07:34.830 95.00000% : 11191.532us 00:07:34.830 98.00000% : 13308.849us 00:07:34.830 99.00000% : 16333.588us 00:07:34.830 99.50000% : 16938.535us 00:07:34.830 99.90000% : 25306.978us 00:07:34.830 99.99000% : 25508.628us 00:07:34.830 99.99900% : 25609.452us 00:07:34.830 99.99990% : 25609.452us 00:07:34.830 99.99999% : 25609.452us 00:07:34.830 00:07:34.830 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:34.830 ================================================================================= 00:07:34.830 1.00000% : 5847.828us 00:07:34.830 10.00000% : 6099.889us 00:07:34.830 25.00000% : 6402.363us 00:07:34.830 50.00000% : 6755.249us 00:07:34.830 75.00000% : 7410.609us 00:07:34.830 90.00000% : 9275.865us 00:07:34.830 95.00000% : 11191.532us 00:07:34.830 98.00000% : 13107.200us 00:07:34.830 99.00000% : 16333.588us 00:07:34.830 99.50000% : 17039.360us 00:07:34.830 99.90000% : 25206.154us 00:07:34.830 99.99000% : 25407.803us 00:07:34.830 99.99900% : 25508.628us 00:07:34.830 99.99990% : 25508.628us 00:07:34.830 99.99999% : 25508.628us 00:07:34.830 00:07:34.830 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:34.830 ============================================================================== 00:07:34.830 Range in us Cumulative IO count 00:07:34.830 5696.591 - 5721.797: 0.0345% ( 6) 00:07:34.830 5721.797 - 5747.003: 0.0574% ( 4) 00:07:34.830 5747.003 - 5772.209: 0.1781% ( 21) 00:07:34.830 5772.209 - 5797.415: 0.3389% ( 28) 00:07:34.830 5797.415 - 5822.622: 0.6204% ( 49) 00:07:34.830 5822.622 - 5847.828: 1.0570% ( 76) 00:07:34.830 5847.828 - 5873.034: 1.5223% ( 81) 00:07:34.830 5873.034 - 5898.240: 2.1772% ( 114) 00:07:34.830 5898.240 - 5923.446: 3.0733% ( 156) 00:07:34.830 5923.446 - 5948.652: 4.0614% ( 172) 00:07:34.830 5948.652 - 5973.858: 5.0551% ( 173) 00:07:34.830 5973.858 - 5999.065: 6.1409% ( 189) 00:07:34.830 5999.065 - 6024.271: 7.2783% ( 198) 00:07:34.830 6024.271 - 6049.477: 8.4042% ( 196) 00:07:34.830 6049.477 - 6074.683: 9.5588% ( 201) 00:07:34.830 6074.683 - 6099.889: 10.6790% ( 195) 00:07:34.830 6099.889 - 6125.095: 11.8107% ( 197) 00:07:34.830 6125.095 - 6150.302: 12.9481% ( 198) 00:07:34.830 6150.302 - 6175.508: 14.1716% ( 213) 00:07:34.830 6175.508 - 6200.714: 15.3895% ( 212) 00:07:34.830 6200.714 - 6225.920: 16.5499% ( 202) 00:07:34.830 6225.920 - 6251.126: 17.7102% ( 202) 00:07:34.830 6251.126 - 6276.332: 18.9166% ( 210) 00:07:34.830 6276.332 - 6301.538: 20.1287% ( 211) 00:07:34.830 6301.538 - 6326.745: 21.3752% ( 217) 00:07:34.830 6326.745 - 6351.951: 22.7826% ( 245) 00:07:34.830 6351.951 - 6377.157: 24.2590% ( 257) 00:07:34.830 6377.157 - 6402.363: 25.8215% ( 272) 00:07:34.830 6402.363 - 6427.569: 27.4242% ( 279) 00:07:34.830 6427.569 - 6452.775: 29.2279% ( 314) 00:07:34.830 6452.775 - 6503.188: 32.8872% ( 637) 00:07:34.830 6503.188 - 6553.600: 36.5119% ( 631) 00:07:34.830 6553.600 - 6604.012: 40.4239% ( 681) 
00:07:34.830 6604.012 - 6654.425: 44.1923% ( 656) 00:07:34.830 6654.425 - 6704.837: 47.5299% ( 581) 00:07:34.830 6704.837 - 6755.249: 50.5572% ( 527) 00:07:34.830 6755.249 - 6805.662: 52.9412% ( 415) 00:07:34.830 6805.662 - 6856.074: 55.0609% ( 369) 00:07:34.830 6856.074 - 6906.486: 57.2151% ( 375) 00:07:34.830 6906.486 - 6956.898: 59.3405% ( 370) 00:07:34.830 6956.898 - 7007.311: 61.4545% ( 368) 00:07:34.830 7007.311 - 7057.723: 63.5570% ( 366) 00:07:34.830 7057.723 - 7108.135: 65.7112% ( 375) 00:07:34.830 7108.135 - 7158.548: 67.9170% ( 384) 00:07:34.830 7158.548 - 7208.960: 69.9506% ( 354) 00:07:34.830 7208.960 - 7259.372: 71.5820% ( 284) 00:07:34.830 7259.372 - 7309.785: 73.0009% ( 247) 00:07:34.830 7309.785 - 7360.197: 74.1211% ( 195) 00:07:34.830 7360.197 - 7410.609: 75.1608% ( 181) 00:07:34.830 7410.609 - 7461.022: 76.1661% ( 175) 00:07:34.830 7461.022 - 7511.434: 76.9991% ( 145) 00:07:34.830 7511.434 - 7561.846: 77.8263% ( 144) 00:07:34.830 7561.846 - 7612.258: 78.5788% ( 131) 00:07:34.830 7612.258 - 7662.671: 79.2624% ( 119) 00:07:34.830 7662.671 - 7713.083: 79.9345% ( 117) 00:07:34.830 7713.083 - 7763.495: 80.5722% ( 111) 00:07:34.830 7763.495 - 7813.908: 81.2328% ( 115) 00:07:34.830 7813.908 - 7864.320: 81.7785% ( 95) 00:07:34.830 7864.320 - 7914.732: 82.3702% ( 103) 00:07:34.830 7914.732 - 7965.145: 82.8757% ( 88) 00:07:34.830 7965.145 - 8015.557: 83.3123% ( 76) 00:07:34.830 8015.557 - 8065.969: 83.6972% ( 67) 00:07:34.831 8065.969 - 8116.382: 84.0648% ( 64) 00:07:34.831 8116.382 - 8166.794: 84.4727% ( 71) 00:07:34.831 8166.794 - 8217.206: 84.8173% ( 60) 00:07:34.831 8217.206 - 8267.618: 85.1275% ( 54) 00:07:34.831 8267.618 - 8318.031: 85.4262% ( 52) 00:07:34.831 8318.031 - 8368.443: 85.6962% ( 47) 00:07:34.831 8368.443 - 8418.855: 85.9777% ( 49) 00:07:34.831 8418.855 - 8469.268: 86.2822% ( 53) 00:07:34.831 8469.268 - 8519.680: 86.5751% ( 51) 00:07:34.831 8519.680 - 8570.092: 86.8222% ( 43) 00:07:34.831 8570.092 - 8620.505: 87.0634% ( 42) 00:07:34.831 8620.505 - 8670.917: 87.2760% ( 37) 00:07:34.831 8670.917 - 8721.329: 87.4426% ( 29) 00:07:34.831 8721.329 - 8771.742: 87.6321% ( 33) 00:07:34.831 8771.742 - 8822.154: 87.8389% ( 36) 00:07:34.831 8822.154 - 8872.566: 88.0227% ( 32) 00:07:34.831 8872.566 - 8922.978: 88.1893% ( 29) 00:07:34.831 8922.978 - 8973.391: 88.3617% ( 30) 00:07:34.831 8973.391 - 9023.803: 88.5398% ( 31) 00:07:34.831 9023.803 - 9074.215: 88.7178% ( 31) 00:07:34.831 9074.215 - 9124.628: 88.8844% ( 29) 00:07:34.831 9124.628 - 9175.040: 89.0223% ( 24) 00:07:34.831 9175.040 - 9225.452: 89.2119% ( 33) 00:07:34.831 9225.452 - 9275.865: 89.3842% ( 30) 00:07:34.831 9275.865 - 9326.277: 89.5852% ( 35) 00:07:34.831 9326.277 - 9376.689: 89.7691% ( 32) 00:07:34.831 9376.689 - 9427.102: 89.9414% ( 30) 00:07:34.831 9427.102 - 9477.514: 90.1023% ( 28) 00:07:34.831 9477.514 - 9527.926: 90.2688% ( 29) 00:07:34.831 9527.926 - 9578.338: 90.4584% ( 33) 00:07:34.831 9578.338 - 9628.751: 90.6365% ( 31) 00:07:34.831 9628.751 - 9679.163: 90.7916% ( 27) 00:07:34.831 9679.163 - 9729.575: 90.9467% ( 27) 00:07:34.831 9729.575 - 9779.988: 91.1650% ( 38) 00:07:34.831 9779.988 - 9830.400: 91.3488% ( 32) 00:07:34.831 9830.400 - 9880.812: 91.4924% ( 25) 00:07:34.831 9880.812 - 9931.225: 91.6820% ( 33) 00:07:34.831 9931.225 - 9981.637: 91.8428% ( 28) 00:07:34.831 9981.637 - 10032.049: 92.0152% ( 30) 00:07:34.831 10032.049 - 10082.462: 92.2105% ( 34) 00:07:34.831 10082.462 - 10132.874: 92.3483% ( 24) 00:07:34.831 10132.874 - 10183.286: 92.4977% ( 26) 00:07:34.831 10183.286 - 10233.698: 
92.6356% ( 24) 00:07:34.831 10233.698 - 10284.111: 92.7849% ( 26) 00:07:34.831 10284.111 - 10334.523: 92.9285% ( 25) 00:07:34.831 10334.523 - 10384.935: 93.0549% ( 22) 00:07:34.831 10384.935 - 10435.348: 93.1985% ( 25) 00:07:34.831 10435.348 - 10485.760: 93.3134% ( 20) 00:07:34.831 10485.760 - 10536.172: 93.4398% ( 22) 00:07:34.831 10536.172 - 10586.585: 93.5375% ( 17) 00:07:34.831 10586.585 - 10636.997: 93.6294% ( 16) 00:07:34.831 10636.997 - 10687.409: 93.7443% ( 20) 00:07:34.831 10687.409 - 10737.822: 93.8534% ( 19) 00:07:34.831 10737.822 - 10788.234: 93.9625% ( 19) 00:07:34.831 10788.234 - 10838.646: 94.0659% ( 18) 00:07:34.831 10838.646 - 10889.058: 94.1464% ( 14) 00:07:34.831 10889.058 - 10939.471: 94.2325% ( 15) 00:07:34.831 10939.471 - 10989.883: 94.3187% ( 15) 00:07:34.831 10989.883 - 11040.295: 94.4336% ( 20) 00:07:34.831 11040.295 - 11090.708: 94.5198% ( 15) 00:07:34.831 11090.708 - 11141.120: 94.5944% ( 13) 00:07:34.831 11141.120 - 11191.532: 94.6806% ( 15) 00:07:34.831 11191.532 - 11241.945: 94.7553% ( 13) 00:07:34.831 11241.945 - 11292.357: 94.8127% ( 10) 00:07:34.831 11292.357 - 11342.769: 94.8932% ( 14) 00:07:34.831 11342.769 - 11393.182: 94.9908% ( 17) 00:07:34.831 11393.182 - 11443.594: 95.1000% ( 19) 00:07:34.831 11443.594 - 11494.006: 95.2321% ( 23) 00:07:34.831 11494.006 - 11544.418: 95.3527% ( 21) 00:07:34.831 11544.418 - 11594.831: 95.5021% ( 26) 00:07:34.831 11594.831 - 11645.243: 95.6284% ( 22) 00:07:34.831 11645.243 - 11695.655: 95.7376% ( 19) 00:07:34.831 11695.655 - 11746.068: 95.8525% ( 20) 00:07:34.831 11746.068 - 11796.480: 95.9616% ( 19) 00:07:34.831 11796.480 - 11846.892: 96.0765% ( 20) 00:07:34.831 11846.892 - 11897.305: 96.1914% ( 20) 00:07:34.831 11897.305 - 11947.717: 96.2776% ( 15) 00:07:34.831 11947.717 - 11998.129: 96.3752% ( 17) 00:07:34.831 11998.129 - 12048.542: 96.4671% ( 16) 00:07:34.831 12048.542 - 12098.954: 96.5705% ( 18) 00:07:34.831 12098.954 - 12149.366: 96.6739% ( 18) 00:07:34.831 12149.366 - 12199.778: 96.8118% ( 24) 00:07:34.831 12199.778 - 12250.191: 96.9497% ( 24) 00:07:34.831 12250.191 - 12300.603: 97.0646% ( 20) 00:07:34.831 12300.603 - 12351.015: 97.1220% ( 10) 00:07:34.831 12351.015 - 12401.428: 97.1909% ( 12) 00:07:34.831 12401.428 - 12451.840: 97.2484% ( 10) 00:07:34.831 12451.840 - 12502.252: 97.3058% ( 10) 00:07:34.831 12502.252 - 12552.665: 97.3690% ( 11) 00:07:34.831 12552.665 - 12603.077: 97.4265% ( 10) 00:07:34.831 12603.077 - 12653.489: 97.4839% ( 10) 00:07:34.831 12653.489 - 12703.902: 97.5241% ( 7) 00:07:34.831 12703.902 - 12754.314: 97.5701% ( 8) 00:07:34.831 12754.314 - 12804.726: 97.6103% ( 7) 00:07:34.831 12804.726 - 12855.138: 97.6562% ( 8) 00:07:34.831 12855.138 - 12905.551: 97.6907% ( 6) 00:07:34.831 12905.551 - 13006.375: 97.7769% ( 15) 00:07:34.831 13006.375 - 13107.200: 97.7941% ( 3) 00:07:34.831 13107.200 - 13208.025: 97.8114% ( 3) 00:07:34.831 13208.025 - 13308.849: 97.8516% ( 7) 00:07:34.831 13308.849 - 13409.674: 97.8860% ( 6) 00:07:34.831 13409.674 - 13510.498: 97.9205% ( 6) 00:07:34.831 13510.498 - 13611.323: 97.9607% ( 7) 00:07:34.831 13611.323 - 13712.148: 97.9952% ( 6) 00:07:34.831 13712.148 - 13812.972: 98.0354% ( 7) 00:07:34.831 13812.972 - 13913.797: 98.0699% ( 6) 00:07:34.831 13913.797 - 14014.622: 98.1101% ( 7) 00:07:34.831 14014.622 - 14115.446: 98.1503% ( 7) 00:07:34.831 14115.446 - 14216.271: 98.1618% ( 2) 00:07:34.831 14720.394 - 14821.218: 98.1847% ( 4) 00:07:34.831 14821.218 - 14922.043: 98.2077% ( 4) 00:07:34.831 14922.043 - 15022.868: 98.2594% ( 9) 00:07:34.831 15022.868 - 15123.692: 
98.3226% ( 11) 00:07:34.831 15123.692 - 15224.517: 98.3801% ( 10) 00:07:34.831 15224.517 - 15325.342: 98.4375% ( 10) 00:07:34.831 15325.342 - 15426.166: 98.5007% ( 11) 00:07:34.831 15426.166 - 15526.991: 98.5811% ( 14) 00:07:34.831 15526.991 - 15627.815: 98.6730% ( 16) 00:07:34.831 15627.815 - 15728.640: 98.7879% ( 20) 00:07:34.831 15728.640 - 15829.465: 98.9028% ( 20) 00:07:34.831 15829.465 - 15930.289: 98.9890% ( 15) 00:07:34.831 15930.289 - 16031.114: 99.0751% ( 15) 00:07:34.831 16031.114 - 16131.938: 99.1326% ( 10) 00:07:34.831 16131.938 - 16232.763: 99.1785% ( 8) 00:07:34.831 16232.763 - 16333.588: 99.2532% ( 13) 00:07:34.831 16333.588 - 16434.412: 99.3222% ( 12) 00:07:34.831 16434.412 - 16535.237: 99.3796% ( 10) 00:07:34.831 16535.237 - 16636.062: 99.4141% ( 6) 00:07:34.831 16636.062 - 16736.886: 99.4428% ( 5) 00:07:34.831 16736.886 - 16837.711: 99.4773% ( 6) 00:07:34.831 16837.711 - 16938.535: 99.5117% ( 6) 00:07:34.831 16938.535 - 17039.360: 99.5462% ( 6) 00:07:34.831 17039.360 - 17140.185: 99.5807% ( 6) 00:07:34.832 17140.185 - 17241.009: 99.6209% ( 7) 00:07:34.832 17241.009 - 17341.834: 99.6324% ( 2) 00:07:34.832 23492.135 - 23592.960: 99.6553% ( 4) 00:07:34.832 23592.960 - 23693.785: 99.6898% ( 6) 00:07:34.832 23693.785 - 23794.609: 99.7300% ( 7) 00:07:34.832 23794.609 - 23895.434: 99.7645% ( 6) 00:07:34.832 23895.434 - 23996.258: 99.7989% ( 6) 00:07:34.832 23996.258 - 24097.083: 99.8334% ( 6) 00:07:34.832 24097.083 - 24197.908: 99.8679% ( 6) 00:07:34.832 24197.908 - 24298.732: 99.8736% ( 1) 00:07:34.832 24399.557 - 24500.382: 99.9138% ( 7) 00:07:34.832 24500.382 - 24601.206: 99.9483% ( 6) 00:07:34.832 24601.206 - 24702.031: 99.9828% ( 6) 00:07:34.832 24702.031 - 24802.855: 100.0000% ( 3) 00:07:34.832 00:07:34.832 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:34.832 ============================================================================== 00:07:34.832 Range in us Cumulative IO count 00:07:34.832 5570.560 - 5595.766: 0.0057% ( 1) 00:07:34.832 5595.766 - 5620.972: 0.0172% ( 2) 00:07:34.832 5620.972 - 5646.178: 0.0287% ( 2) 00:07:34.832 5646.178 - 5671.385: 0.0919% ( 11) 00:07:34.832 5671.385 - 5696.591: 0.1723% ( 14) 00:07:34.832 5696.591 - 5721.797: 0.3964% ( 39) 00:07:34.832 5721.797 - 5747.003: 0.6664% ( 47) 00:07:34.832 5747.003 - 5772.209: 1.0398% ( 65) 00:07:34.832 5772.209 - 5797.415: 1.5108% ( 82) 00:07:34.832 5797.415 - 5822.622: 2.0852% ( 100) 00:07:34.832 5822.622 - 5847.828: 2.8780% ( 138) 00:07:34.832 5847.828 - 5873.034: 3.7799% ( 157) 00:07:34.832 5873.034 - 5898.240: 4.7335% ( 166) 00:07:34.832 5898.240 - 5923.446: 5.6124% ( 153) 00:07:34.832 5923.446 - 5948.652: 6.5315% ( 160) 00:07:34.832 5948.652 - 5973.858: 7.4621% ( 162) 00:07:34.832 5973.858 - 5999.065: 8.4272% ( 168) 00:07:34.832 5999.065 - 6024.271: 9.3693% ( 164) 00:07:34.832 6024.271 - 6049.477: 10.2826% ( 159) 00:07:34.832 6049.477 - 6074.683: 11.1960% ( 159) 00:07:34.832 6074.683 - 6099.889: 12.1438% ( 165) 00:07:34.832 6099.889 - 6125.095: 13.1951% ( 183) 00:07:34.832 6125.095 - 6150.302: 14.1659% ( 169) 00:07:34.832 6150.302 - 6175.508: 15.0965% ( 162) 00:07:34.832 6175.508 - 6200.714: 16.1305% ( 180) 00:07:34.832 6200.714 - 6225.920: 17.1530% ( 178) 00:07:34.832 6225.920 - 6251.126: 18.1698% ( 177) 00:07:34.832 6251.126 - 6276.332: 19.4623% ( 225) 00:07:34.832 6276.332 - 6301.538: 20.7606% ( 226) 00:07:34.832 6301.538 - 6326.745: 22.1105% ( 235) 00:07:34.832 6326.745 - 6351.951: 23.6098% ( 261) 00:07:34.832 6351.951 - 6377.157: 25.1494% ( 268) 00:07:34.832 6377.157 - 
6402.363: 26.7463% ( 278) 00:07:34.832 6402.363 - 6427.569: 28.2341% ( 259) 00:07:34.832 6427.569 - 6452.775: 29.7449% ( 263) 00:07:34.832 6452.775 - 6503.188: 33.1170% ( 587) 00:07:34.832 6503.188 - 6553.600: 36.4258% ( 576) 00:07:34.832 6553.600 - 6604.012: 39.7576% ( 580) 00:07:34.832 6604.012 - 6654.425: 43.1698% ( 594) 00:07:34.832 6654.425 - 6704.837: 46.6510% ( 606) 00:07:34.832 6704.837 - 6755.249: 49.7300% ( 536) 00:07:34.832 6755.249 - 6805.662: 52.6023% ( 500) 00:07:34.832 6805.662 - 6856.074: 55.0092% ( 419) 00:07:34.832 6856.074 - 6906.486: 57.0485% ( 355) 00:07:34.832 6906.486 - 6956.898: 59.0648% ( 351) 00:07:34.832 6956.898 - 7007.311: 60.9432% ( 327) 00:07:34.832 7007.311 - 7057.723: 62.8734% ( 336) 00:07:34.832 7057.723 - 7108.135: 64.8725% ( 348) 00:07:34.832 7108.135 - 7158.548: 66.8716% ( 348) 00:07:34.832 7158.548 - 7208.960: 68.6638% ( 312) 00:07:34.832 7208.960 - 7259.372: 70.6342% ( 343) 00:07:34.832 7259.372 - 7309.785: 72.3288% ( 295) 00:07:34.832 7309.785 - 7360.197: 73.6386% ( 228) 00:07:34.832 7360.197 - 7410.609: 74.8449% ( 210) 00:07:34.832 7410.609 - 7461.022: 75.8330% ( 172) 00:07:34.832 7461.022 - 7511.434: 76.7750% ( 164) 00:07:34.832 7511.434 - 7561.846: 77.6310% ( 149) 00:07:34.832 7561.846 - 7612.258: 78.3892% ( 132) 00:07:34.832 7612.258 - 7662.671: 79.1073% ( 125) 00:07:34.832 7662.671 - 7713.083: 79.8081% ( 122) 00:07:34.832 7713.083 - 7763.495: 80.4343% ( 109) 00:07:34.832 7763.495 - 7813.908: 81.0777% ( 112) 00:07:34.832 7813.908 - 7864.320: 81.6693% ( 103) 00:07:34.832 7864.320 - 7914.732: 82.2208% ( 96) 00:07:34.832 7914.732 - 7965.145: 82.7493% ( 92) 00:07:34.832 7965.145 - 8015.557: 83.2663% ( 90) 00:07:34.832 8015.557 - 8065.969: 83.7259% ( 80) 00:07:34.832 8065.969 - 8116.382: 84.0763% ( 61) 00:07:34.832 8116.382 - 8166.794: 84.4095% ( 58) 00:07:34.832 8166.794 - 8217.206: 84.7197% ( 54) 00:07:34.832 8217.206 - 8267.618: 85.0471% ( 57) 00:07:34.832 8267.618 - 8318.031: 85.3056% ( 45) 00:07:34.832 8318.031 - 8368.443: 85.5699% ( 46) 00:07:34.832 8368.443 - 8418.855: 85.7767% ( 36) 00:07:34.832 8418.855 - 8469.268: 86.0581% ( 49) 00:07:34.832 8469.268 - 8519.680: 86.2649% ( 36) 00:07:34.832 8519.680 - 8570.092: 86.5349% ( 47) 00:07:34.832 8570.092 - 8620.505: 86.7130% ( 31) 00:07:34.832 8620.505 - 8670.917: 87.0290% ( 55) 00:07:34.832 8670.917 - 8721.329: 87.2415% ( 37) 00:07:34.832 8721.329 - 8771.742: 87.4713% ( 40) 00:07:34.832 8771.742 - 8822.154: 87.6666% ( 34) 00:07:34.832 8822.154 - 8872.566: 87.8504% ( 32) 00:07:34.832 8872.566 - 8922.978: 88.0687% ( 38) 00:07:34.832 8922.978 - 8973.391: 88.2985% ( 40) 00:07:34.832 8973.391 - 9023.803: 88.4823% ( 32) 00:07:34.832 9023.803 - 9074.215: 88.6719% ( 33) 00:07:34.832 9074.215 - 9124.628: 88.9246% ( 44) 00:07:34.832 9124.628 - 9175.040: 89.1142% ( 33) 00:07:34.832 9175.040 - 9225.452: 89.3612% ( 43) 00:07:34.832 9225.452 - 9275.865: 89.5795% ( 38) 00:07:34.832 9275.865 - 9326.277: 89.7518% ( 30) 00:07:34.832 9326.277 - 9376.689: 89.9357% ( 32) 00:07:34.832 9376.689 - 9427.102: 90.1367% ( 35) 00:07:34.832 9427.102 - 9477.514: 90.3033% ( 29) 00:07:34.832 9477.514 - 9527.926: 90.4814% ( 31) 00:07:34.832 9527.926 - 9578.338: 90.6652% ( 32) 00:07:34.832 9578.338 - 9628.751: 90.8490% ( 32) 00:07:34.832 9628.751 - 9679.163: 91.0156% ( 29) 00:07:34.832 9679.163 - 9729.575: 91.1765% ( 28) 00:07:34.832 9729.575 - 9779.988: 91.3431% ( 29) 00:07:34.832 9779.988 - 9830.400: 91.4752% ( 23) 00:07:34.832 9830.400 - 9880.812: 91.5958% ( 21) 00:07:34.832 9880.812 - 9931.225: 91.7969% ( 35) 00:07:34.832 
9931.225 - 9981.637: 91.9635% ( 29) 00:07:34.832 9981.637 - 10032.049: 92.1358% ( 30) 00:07:34.832 10032.049 - 10082.462: 92.3139% ( 31) 00:07:34.832 10082.462 - 10132.874: 92.5092% ( 34) 00:07:34.832 10132.874 - 10183.286: 92.6758% ( 29) 00:07:34.832 10183.286 - 10233.698: 92.8768% ( 35) 00:07:34.832 10233.698 - 10284.111: 93.0262% ( 26) 00:07:34.832 10284.111 - 10334.523: 93.1756% ( 26) 00:07:34.832 10334.523 - 10384.935: 93.2904% ( 20) 00:07:34.832 10384.935 - 10435.348: 93.4398% ( 26) 00:07:34.832 10435.348 - 10485.760: 93.5719% ( 23) 00:07:34.832 10485.760 - 10536.172: 93.6983% ( 22) 00:07:34.832 10536.172 - 10586.585: 93.8017% ( 18) 00:07:34.832 10586.585 - 10636.997: 93.9338% ( 23) 00:07:34.832 10636.997 - 10687.409: 94.0200% ( 15) 00:07:34.832 10687.409 - 10737.822: 94.1234% ( 18) 00:07:34.832 10737.822 - 10788.234: 94.2440% ( 21) 00:07:34.832 10788.234 - 10838.646: 94.3532% ( 19) 00:07:34.832 10838.646 - 10889.058: 94.4623% ( 19) 00:07:34.832 10889.058 - 10939.471: 94.5944% ( 23) 00:07:34.832 10939.471 - 10989.883: 94.6749% ( 14) 00:07:34.832 10989.883 - 11040.295: 94.7553% ( 14) 00:07:34.832 11040.295 - 11090.708: 94.8185% ( 11) 00:07:34.833 11090.708 - 11141.120: 94.9046% ( 15) 00:07:34.833 11141.120 - 11191.532: 94.9908% ( 15) 00:07:34.833 11191.532 - 11241.945: 95.0885% ( 17) 00:07:34.833 11241.945 - 11292.357: 95.1689% ( 14) 00:07:34.833 11292.357 - 11342.769: 95.2436% ( 13) 00:07:34.833 11342.769 - 11393.182: 95.3240% ( 14) 00:07:34.833 11393.182 - 11443.594: 95.4102% ( 15) 00:07:34.833 11443.594 - 11494.006: 95.4906% ( 14) 00:07:34.833 11494.006 - 11544.418: 95.5825% ( 16) 00:07:34.833 11544.418 - 11594.831: 95.6687% ( 15) 00:07:34.833 11594.831 - 11645.243: 95.7606% ( 16) 00:07:34.833 11645.243 - 11695.655: 95.8352% ( 13) 00:07:34.833 11695.655 - 11746.068: 95.9157% ( 14) 00:07:34.833 11746.068 - 11796.480: 96.0191% ( 18) 00:07:34.833 11796.480 - 11846.892: 96.1110% ( 16) 00:07:34.833 11846.892 - 11897.305: 96.1857% ( 13) 00:07:34.833 11897.305 - 11947.717: 96.2489% ( 11) 00:07:34.833 11947.717 - 11998.129: 96.3408% ( 16) 00:07:34.833 11998.129 - 12048.542: 96.4040% ( 11) 00:07:34.833 12048.542 - 12098.954: 96.5074% ( 18) 00:07:34.833 12098.954 - 12149.366: 96.5993% ( 16) 00:07:34.833 12149.366 - 12199.778: 96.6912% ( 16) 00:07:34.833 12199.778 - 12250.191: 96.7716% ( 14) 00:07:34.833 12250.191 - 12300.603: 96.8807% ( 19) 00:07:34.833 12300.603 - 12351.015: 96.9439% ( 11) 00:07:34.833 12351.015 - 12401.428: 97.0014% ( 10) 00:07:34.833 12401.428 - 12451.840: 97.0588% ( 10) 00:07:34.833 12451.840 - 12502.252: 97.1105% ( 9) 00:07:34.833 12502.252 - 12552.665: 97.1450% ( 6) 00:07:34.833 12552.665 - 12603.077: 97.1795% ( 6) 00:07:34.833 12603.077 - 12653.489: 97.2312% ( 9) 00:07:34.833 12653.489 - 12703.902: 97.2943% ( 11) 00:07:34.833 12703.902 - 12754.314: 97.3288% ( 6) 00:07:34.833 12754.314 - 12804.726: 97.3690% ( 7) 00:07:34.833 12804.726 - 12855.138: 97.4092% ( 7) 00:07:34.833 12855.138 - 12905.551: 97.4494% ( 7) 00:07:34.833 12905.551 - 13006.375: 97.5184% ( 12) 00:07:34.833 13006.375 - 13107.200: 97.6160% ( 17) 00:07:34.833 13107.200 - 13208.025: 97.6965% ( 14) 00:07:34.833 13208.025 - 13308.849: 97.7597% ( 11) 00:07:34.833 13308.849 - 13409.674: 97.8228% ( 11) 00:07:34.833 13409.674 - 13510.498: 97.8975% ( 13) 00:07:34.833 13510.498 - 13611.323: 97.9607% ( 11) 00:07:34.833 13611.323 - 13712.148: 98.0124% ( 9) 00:07:34.833 13712.148 - 13812.972: 98.0411% ( 5) 00:07:34.833 13812.972 - 13913.797: 98.0756% ( 6) 00:07:34.833 13913.797 - 14014.622: 98.1101% ( 6) 00:07:34.833 
14014.622 - 14115.446: 98.1330% ( 4) 00:07:34.833 14115.446 - 14216.271: 98.1618% ( 5) 00:07:34.833 14317.095 - 14417.920: 98.1962% ( 6) 00:07:34.833 14417.920 - 14518.745: 98.2250% ( 5) 00:07:34.833 14518.745 - 14619.569: 98.2594% ( 6) 00:07:34.833 14619.569 - 14720.394: 98.2881% ( 5) 00:07:34.833 14720.394 - 14821.218: 98.3398% ( 9) 00:07:34.833 14821.218 - 14922.043: 98.3973% ( 10) 00:07:34.833 14922.043 - 15022.868: 98.4547% ( 10) 00:07:34.833 15022.868 - 15123.692: 98.5064% ( 9) 00:07:34.833 15123.692 - 15224.517: 98.5696% ( 11) 00:07:34.833 15224.517 - 15325.342: 98.6213% ( 9) 00:07:34.833 15325.342 - 15426.166: 98.6788% ( 10) 00:07:34.833 15426.166 - 15526.991: 98.7305% ( 9) 00:07:34.833 15526.991 - 15627.815: 98.7937% ( 11) 00:07:34.833 15627.815 - 15728.640: 98.8511% ( 10) 00:07:34.833 15728.640 - 15829.465: 98.9028% ( 9) 00:07:34.833 15829.465 - 15930.289: 98.9660% ( 11) 00:07:34.833 15930.289 - 16031.114: 99.0234% ( 10) 00:07:34.833 16031.114 - 16131.938: 99.0636% ( 7) 00:07:34.833 16131.938 - 16232.763: 99.1268% ( 11) 00:07:34.833 16232.763 - 16333.588: 99.1900% ( 11) 00:07:34.833 16333.588 - 16434.412: 99.2475% ( 10) 00:07:34.833 16434.412 - 16535.237: 99.2992% ( 9) 00:07:34.833 16535.237 - 16636.062: 99.3624% ( 11) 00:07:34.833 16636.062 - 16736.886: 99.4198% ( 10) 00:07:34.833 16736.886 - 16837.711: 99.4658% ( 8) 00:07:34.833 16837.711 - 16938.535: 99.4945% ( 5) 00:07:34.833 16938.535 - 17039.360: 99.5232% ( 5) 00:07:34.833 17039.360 - 17140.185: 99.5577% ( 6) 00:07:34.833 17140.185 - 17241.009: 99.5864% ( 5) 00:07:34.833 17241.009 - 17341.834: 99.6094% ( 4) 00:07:34.833 17341.834 - 17442.658: 99.6324% ( 4) 00:07:34.833 23592.960 - 23693.785: 99.6438% ( 2) 00:07:34.833 23693.785 - 23794.609: 99.6726% ( 5) 00:07:34.833 23794.609 - 23895.434: 99.7013% ( 5) 00:07:34.833 23895.434 - 23996.258: 99.7300% ( 5) 00:07:34.833 23996.258 - 24097.083: 99.7645% ( 6) 00:07:34.833 24097.083 - 24197.908: 99.7932% ( 5) 00:07:34.833 24197.908 - 24298.732: 99.8219% ( 5) 00:07:34.833 24298.732 - 24399.557: 99.8506% ( 5) 00:07:34.833 24399.557 - 24500.382: 99.8794% ( 5) 00:07:34.833 24500.382 - 24601.206: 99.9138% ( 6) 00:07:34.833 24601.206 - 24702.031: 99.9426% ( 5) 00:07:34.833 24702.031 - 24802.855: 99.9770% ( 6) 00:07:34.833 24802.855 - 24903.680: 100.0000% ( 4) 00:07:34.833 00:07:34.833 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:34.833 ============================================================================== 00:07:34.833 Range in us Cumulative IO count 00:07:34.833 5167.262 - 5192.468: 0.0057% ( 1) 00:07:34.833 5192.468 - 5217.674: 0.0115% ( 1) 00:07:34.833 5217.674 - 5242.880: 0.0230% ( 2) 00:07:34.833 5242.880 - 5268.086: 0.0287% ( 1) 00:07:34.833 5268.086 - 5293.292: 0.0345% ( 1) 00:07:34.833 5293.292 - 5318.498: 0.0402% ( 1) 00:07:34.833 5318.498 - 5343.705: 0.0574% ( 3) 00:07:34.833 5343.705 - 5368.911: 0.0689% ( 2) 00:07:34.833 5368.911 - 5394.117: 0.0747% ( 1) 00:07:34.833 5394.117 - 5419.323: 0.0862% ( 2) 00:07:34.833 5419.323 - 5444.529: 0.0977% ( 2) 00:07:34.833 5444.529 - 5469.735: 0.1034% ( 1) 00:07:34.833 5469.735 - 5494.942: 0.1149% ( 2) 00:07:34.833 5494.942 - 5520.148: 0.1206% ( 1) 00:07:34.833 5520.148 - 5545.354: 0.1321% ( 2) 00:07:34.833 5545.354 - 5570.560: 0.1379% ( 1) 00:07:34.833 5570.560 - 5595.766: 0.1494% ( 2) 00:07:34.833 5595.766 - 5620.972: 0.1608% ( 2) 00:07:34.833 5620.972 - 5646.178: 0.1666% ( 1) 00:07:34.833 5646.178 - 5671.385: 0.1838% ( 3) 00:07:34.833 5671.385 - 5696.591: 0.2011% ( 3) 00:07:34.833 5696.591 - 5721.797: 0.2413% 
( 7) 00:07:34.833 5721.797 - 5747.003: 0.3159% ( 13) 00:07:34.833 5747.003 - 5772.209: 0.4481% ( 23) 00:07:34.833 5772.209 - 5797.415: 0.6032% ( 27) 00:07:34.833 5797.415 - 5822.622: 0.8559% ( 44) 00:07:34.833 5822.622 - 5847.828: 1.2695% ( 72) 00:07:34.833 5847.828 - 5873.034: 1.9187% ( 113) 00:07:34.833 5873.034 - 5898.240: 2.6252% ( 123) 00:07:34.833 5898.240 - 5923.446: 3.4812% ( 149) 00:07:34.833 5923.446 - 5948.652: 4.4347% ( 166) 00:07:34.833 5948.652 - 5973.858: 5.4400% ( 175) 00:07:34.833 5973.858 - 5999.065: 6.5257% ( 189) 00:07:34.833 5999.065 - 6024.271: 7.6287% ( 192) 00:07:34.833 6024.271 - 6049.477: 8.7201% ( 190) 00:07:34.833 6049.477 - 6074.683: 9.8346% ( 194) 00:07:34.833 6074.683 - 6099.889: 10.9490% ( 194) 00:07:34.833 6099.889 - 6125.095: 12.0519% ( 192) 00:07:34.834 6125.095 - 6150.302: 13.2008% ( 200) 00:07:34.834 6150.302 - 6175.508: 14.3612% ( 202) 00:07:34.834 6175.508 - 6200.714: 15.5963% ( 215) 00:07:34.834 6200.714 - 6225.920: 16.7796% ( 206) 00:07:34.834 6225.920 - 6251.126: 18.0032% ( 213) 00:07:34.834 6251.126 - 6276.332: 19.2613% ( 219) 00:07:34.834 6276.332 - 6301.538: 20.4504% ( 207) 00:07:34.834 6301.538 - 6326.745: 21.7486% ( 226) 00:07:34.834 6326.745 - 6351.951: 23.1618% ( 246) 00:07:34.834 6351.951 - 6377.157: 24.6841% ( 265) 00:07:34.834 6377.157 - 6402.363: 26.2466% ( 272) 00:07:34.834 6402.363 - 6427.569: 27.9986% ( 305) 00:07:34.834 6427.569 - 6452.775: 29.7335% ( 302) 00:07:34.834 6452.775 - 6503.188: 33.2950% ( 620) 00:07:34.834 6503.188 - 6553.600: 37.0175% ( 648) 00:07:34.834 6553.600 - 6604.012: 40.7629% ( 652) 00:07:34.834 6604.012 - 6654.425: 44.5830% ( 665) 00:07:34.834 6654.425 - 6704.837: 47.9894% ( 593) 00:07:34.834 6704.837 - 6755.249: 50.8157% ( 492) 00:07:34.834 6755.249 - 6805.662: 53.4237% ( 454) 00:07:34.834 6805.662 - 6856.074: 55.6009% ( 379) 00:07:34.834 6856.074 - 6906.486: 57.6287% ( 353) 00:07:34.834 6906.486 - 6956.898: 59.7426% ( 368) 00:07:34.834 6956.898 - 7007.311: 61.7992% ( 358) 00:07:34.834 7007.311 - 7057.723: 63.8902% ( 364) 00:07:34.834 7057.723 - 7108.135: 65.9524% ( 359) 00:07:34.834 7108.135 - 7158.548: 68.0894% ( 372) 00:07:34.834 7158.548 - 7208.960: 70.0770% ( 346) 00:07:34.834 7208.960 - 7259.372: 71.8061% ( 301) 00:07:34.834 7259.372 - 7309.785: 73.1503% ( 234) 00:07:34.834 7309.785 - 7360.197: 74.3394% ( 207) 00:07:34.834 7360.197 - 7410.609: 75.3447% ( 175) 00:07:34.834 7410.609 - 7461.022: 76.2580% ( 159) 00:07:34.834 7461.022 - 7511.434: 77.1369% ( 153) 00:07:34.834 7511.434 - 7561.846: 77.9814% ( 147) 00:07:34.834 7561.846 - 7612.258: 78.7397% ( 132) 00:07:34.834 7612.258 - 7662.671: 79.4290% ( 120) 00:07:34.834 7662.671 - 7713.083: 80.1126% ( 119) 00:07:34.834 7713.083 - 7763.495: 80.7273% ( 107) 00:07:34.834 7763.495 - 7813.908: 81.3534% ( 109) 00:07:34.834 7813.908 - 7864.320: 81.8704% ( 90) 00:07:34.834 7864.320 - 7914.732: 82.4104% ( 94) 00:07:34.834 7914.732 - 7965.145: 82.9504% ( 94) 00:07:34.834 7965.145 - 8015.557: 83.4214% ( 82) 00:07:34.834 8015.557 - 8065.969: 83.8465% ( 74) 00:07:34.834 8065.969 - 8116.382: 84.1969% ( 61) 00:07:34.834 8116.382 - 8166.794: 84.5588% ( 63) 00:07:34.834 8166.794 - 8217.206: 84.8575% ( 52) 00:07:34.834 8217.206 - 8267.618: 85.1448% ( 50) 00:07:34.834 8267.618 - 8318.031: 85.3860% ( 42) 00:07:34.834 8318.031 - 8368.443: 85.6273% ( 42) 00:07:34.834 8368.443 - 8418.855: 85.8398% ( 37) 00:07:34.834 8418.855 - 8469.268: 86.0696% ( 40) 00:07:34.834 8469.268 - 8519.680: 86.2707% ( 35) 00:07:34.834 8519.680 - 8570.092: 86.5119% ( 42) 00:07:34.834 8570.092 - 8620.505: 
00:07:34.834 [tail of the preceding latency histogram elided: per-bucket entries from 8620.505 us upward, cumulative IO count reaching 100.0000% in the 24903.680 - 25004.505 us bucket]
00:07:34.835
00:07:34.835 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:34.835 ==============================================================================
00:07:34.835        Range in us     Cumulative    IO count
00:07:34.835 [per-bucket entries elided: 4385.871 us through 25710.277 us, cumulative IO count reaching 100.0000% in the 25609.452 - 25710.277 us bucket]
00:07:34.837
00:07:34.837 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:34.837 ==============================================================================
00:07:34.837        Range in us     Cumulative    IO count
00:07:34.837 [per-bucket entries elided: 3831.335 us through 25609.452 us, cumulative IO count reaching 100.0000% in the 25508.628 - 25609.452 us bucket]
00:07:34.839
00:07:34.839 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:34.839 ==============================================================================
00:07:34.839        Range in us     Cumulative    IO count
00:07:34.839 [per-bucket entries elided: 3327.212 us through 25508.628 us, cumulative IO count reaching 100.0000% in the 25407.803 - 25508.628 us bucket]
00:07:34.841
04:57:03 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:35.777 Initializing NVMe Controllers
00:07:35.777 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:35.777 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:35.777 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:35.777 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:35.777 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:35.777 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:35.777 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:35.777 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:35.777 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:35.777 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:35.777 Initialization complete. Launching workers.
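The annotation below is an editorial reading of the spdk_nvme_perf invocation above, based on the tool's usage text; it is not part of the original log output, and the flag descriptions are worth re-checking against the local build's --help:

# Assumed flag meanings for the invocation above (annotation only, verify
# with: /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf --help)
#   -q 128    queue depth: up to 128 outstanding I/Os per namespace
#   -w write  sequential-write workload
#   -o 12288  I/O size in bytes (12 KiB)
#   -t 1      run time in seconds
#   -LL       software latency tracking; giving -L twice requests the
#             detailed per-bucket histograms in addition to the summary
#   -i 0      shared memory group ID, allowing coexistence with other
#             SPDK processes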
00:07:35.777 ========================================================
00:07:35.777                                                                             Latency(us)
00:07:35.777 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:35.777 PCIE (0000:00:13.0) NSID 1 from core 0:   18025.22     211.23    7103.12    4791.68   23793.24
00:07:35.777 PCIE (0000:00:10.0) NSID 1 from core 0:   18025.22     211.23    7096.70    4268.38   23236.55
00:07:35.777 PCIE (0000:00:11.0) NSID 1 from core 0:   18025.22     211.23    7090.71    4221.83   22560.39
00:07:35.777 PCIE (0000:00:12.0) NSID 1 from core 0:   18025.22     211.23    7084.82    3787.59   22063.56
00:07:35.777 PCIE (0000:00:12.0) NSID 2 from core 0:   18025.22     211.23    7078.83    3503.28   21395.65
00:07:35.777 PCIE (0000:00:12.0) NSID 3 from core 0:   18025.22     211.23    7072.91    3251.79   20880.74
00:07:35.777 ========================================================
00:07:35.777 Total                                  :  108151.30    1267.40    7087.85    3251.79   23793.24
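The IOPS and MiB/s columns are mutually consistent given the 12288-byte I/O size passed via -o: 18025.22 IOPS x 12288 bytes = 211.23 MiB/s per namespace, and 108151.30 x 12288 = 1267.40 MiB/s aggregate. The awk one-liners below reproduce the arithmetic; they are illustrative only and not part of the test run:

awk 'BEGIN { printf "%.2f\n", 18025.22  * 12288 / (1024 * 1024) }'   # per-namespace MiB/s -> 211.23
awk 'BEGIN { printf "%.2f\n", 108151.30 * 12288 / (1024 * 1024) }'   # aggregate MiB/s     -> 1267.40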
00:07:35.777
00:07:35.777 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:35.777 =================================================================================
00:07:35.777  1.00000% :  6074.683us
00:07:35.777 10.00000% :  6402.363us
00:07:35.777 25.00000% :  6604.012us
00:07:35.777 50.00000% :  6805.662us
00:07:35.777 75.00000% :  7208.960us
00:07:35.777 90.00000% :  8065.969us
00:07:35.777 95.00000% :  8721.329us
00:07:35.777 98.00000% :  9275.865us
00:07:35.777 99.00000% : 12603.077us
00:07:35.777 99.50000% : 17442.658us
00:07:35.777 99.90000% : 23492.135us
00:07:35.777 99.99000% : 23794.609us
00:07:35.777 99.99900% : 23794.609us
00:07:35.777 99.99990% : 23794.609us
00:07:35.777 99.99999% : 23794.609us
00:07:35.777
00:07:35.777 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:35.777 =================================================================================
00:07:35.777  1.00000% :  5999.065us
00:07:35.777 10.00000% :  6326.745us
00:07:35.777 25.00000% :  6503.188us
00:07:35.777 50.00000% :  6805.662us
00:07:35.777 75.00000% :  7309.785us
00:07:35.777 90.00000% :  8116.382us
00:07:35.777 95.00000% :  8670.917us
00:07:35.777 98.00000% :  9477.514us
00:07:35.777 99.00000% : 12855.138us
00:07:35.777 99.50000% : 17543.483us
00:07:35.777 99.90000% : 22887.188us
00:07:35.777 99.99000% : 23290.486us
00:07:35.777 99.99900% : 23290.486us
00:07:35.777 99.99990% : 23290.486us
00:07:35.777 99.99999% : 23290.486us
00:07:35.777
00:07:35.777 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:35.777 =================================================================================
00:07:35.777  1.00000% :  5973.858us
00:07:35.777 10.00000% :  6402.363us
00:07:35.777 25.00000% :  6553.600us
00:07:35.777 50.00000% :  6805.662us
00:07:35.777 75.00000% :  7259.372us
00:07:35.777 90.00000% :  8166.794us
00:07:35.777 95.00000% :  8721.329us
00:07:35.777 98.00000% :  9477.514us
00:07:35.777 99.00000% : 13107.200us
00:07:35.777 99.50000% : 17039.360us
00:07:35.777 99.90000% : 22282.240us
00:07:35.777 99.99000% : 22584.714us
00:07:35.777 99.99900% : 22584.714us
00:07:35.777 99.99990% : 22584.714us
00:07:35.777 99.99999% : 22584.714us
00:07:35.777
00:07:35.777 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:35.777 =================================================================================
00:07:35.777  1.00000% :  5973.858us
00:07:35.777 10.00000% :  6402.363us
00:07:35.777 25.00000% :  6604.012us
00:07:35.777 50.00000% :  6805.662us
00:07:35.777 75.00000% :  7208.960us
00:07:35.777 90.00000% :  8166.794us
00:07:35.777 95.00000% :  8670.917us
00:07:35.777 98.00000% :  9578.338us
00:07:35.777 99.00000% : 13409.674us
00:07:35.777 99.50000% : 17241.009us
00:07:35.777 99.90000% : 21778.117us
00:07:35.777 99.99000% : 22080.591us
00:07:35.777 99.99900% : 22080.591us
00:07:35.777 99.99990% : 22080.591us
00:07:35.777 99.99999% : 22080.591us
00:07:35.777
00:07:35.777 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:35.777 =================================================================================
00:07:35.777  1.00000% :  5999.065us
00:07:35.777 10.00000% :  6402.363us
00:07:35.777 25.00000% :  6604.012us
00:07:35.777 50.00000% :  6805.662us
00:07:35.777 75.00000% :  7208.960us
00:07:35.777 90.00000% :  8166.794us
00:07:35.777 95.00000% :  8670.917us
00:07:35.777 98.00000% :  9376.689us
00:07:35.777 99.00000% : 13409.674us
00:07:35.777 99.50000% : 17140.185us
00:07:35.777 99.90000% : 20870.695us
00:07:35.777 99.99000% : 21475.643us
00:07:35.777 99.99900% : 21475.643us
00:07:35.777 99.99990% : 21475.643us
00:07:35.777 99.99999% : 21475.643us
00:07:35.777
00:07:35.777 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:35.777 =================================================================================
00:07:35.777  1.00000% :  5948.652us
00:07:35.777 10.00000% :  6402.363us
00:07:35.777 25.00000% :  6604.012us
00:07:35.777 50.00000% :  6805.662us
00:07:35.777 75.00000% :  7208.960us
00:07:35.777 90.00000% :  8065.969us
00:07:35.777 95.00000% :  8721.329us
00:07:35.777 98.00000% :  9225.452us
00:07:35.777 99.00000% : 12653.489us
00:07:35.777 99.50000% : 16837.711us
00:07:35.777 99.90000% : 20265.748us
00:07:35.777 99.99000% : 20870.695us
00:07:35.777 99.99900% : 20870.695us
00:07:35.777 99.99990% : 20870.695us
00:07:35.777 99.99999% : 20971.520us
00:07:35.777
00:07:35.777 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:35.777 ==============================================================================
00:07:35.778        Range in us     Cumulative    IO count
00:07:35.778 [per-bucket entries elided: 4789.169 us through 23794.609 us, cumulative IO count reaching 100.0000% in the 23693.785 - 23794.609 us bucket]
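The summary percentiles above are consistent with being read off the cumulative histograms: each reported percentile is the upper bound of the first bucket whose cumulative IO count reaches that fraction. For PCIE (0000:00:13.0) NSID 1, for example, the full per-bucket dump shows the cumulative count first passing 99% in the 12552.665 - 12603.077 us bucket (99.0414%), matching the 99.00000% : 12603.077us summary line above.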
00:07:35.779
00:07:35.779 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:35.779 ==============================================================================
00:07:35.779        Range in us     Cumulative    IO count
00:07:35.779 [per-bucket entries elided: 4259.840 us through 23290.486 us, cumulative IO count reaching 100.0000% in the 23189.662 - 23290.486 us bucket]
00:07:35.779
00:07:35.779 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:35.779 ==============================================================================
00:07:35.779        Range in us     Cumulative    IO count
00:07:35.780 [per-bucket entries from 4209.428 us onward elided]
8922.978: 96.6922% ( 27) 00:07:35.780 8922.978 - 8973.391: 96.8030% ( 20) 00:07:35.780 8973.391 - 9023.803: 96.8972% ( 17) 00:07:35.780 9023.803 - 9074.215: 96.9858% ( 16) 00:07:35.780 9074.215 - 9124.628: 97.0745% ( 16) 00:07:35.780 9124.628 - 9175.040: 97.1410% ( 12) 00:07:35.780 9175.040 - 9225.452: 97.2019% ( 11) 00:07:35.780 9225.452 - 9275.865: 97.3404% ( 25) 00:07:35.780 9275.865 - 9326.277: 97.6064% ( 48) 00:07:35.780 9326.277 - 9376.689: 97.7948% ( 34) 00:07:35.780 9376.689 - 9427.102: 97.8890% ( 17) 00:07:35.780 9427.102 - 9477.514: 98.1106% ( 40) 00:07:35.780 9477.514 - 9527.926: 98.1771% ( 12) 00:07:35.780 9527.926 - 9578.338: 98.2436% ( 12) 00:07:35.780 9578.338 - 9628.751: 98.2934% ( 9) 00:07:35.780 9628.751 - 9679.163: 98.3599% ( 12) 00:07:35.780 9679.163 - 9729.575: 98.4486% ( 16) 00:07:35.780 9729.575 - 9779.988: 98.5262% ( 14) 00:07:35.780 9779.988 - 9830.400: 98.5594% ( 6) 00:07:35.780 9830.400 - 9880.812: 98.5760% ( 3) 00:07:35.780 9880.812 - 9931.225: 98.5816% ( 1) 00:07:35.780 10132.874 - 10183.286: 98.5871% ( 1) 00:07:35.780 10334.523 - 10384.935: 98.6037% ( 3) 00:07:35.780 10384.935 - 10435.348: 98.6813% ( 14) 00:07:35.780 10435.348 - 10485.760: 98.7977% ( 21) 00:07:35.780 10485.760 - 10536.172: 98.8531% ( 10) 00:07:35.780 10536.172 - 10586.585: 98.8697% ( 3) 00:07:35.780 10586.585 - 10636.997: 98.8808% ( 2) 00:07:35.780 10636.997 - 10687.409: 98.8974% ( 3) 00:07:35.780 10687.409 - 10737.822: 98.9029% ( 1) 00:07:35.780 10737.822 - 10788.234: 98.9140% ( 2) 00:07:35.780 10788.234 - 10838.646: 98.9195% ( 1) 00:07:35.780 10838.646 - 10889.058: 98.9306% ( 2) 00:07:35.780 10889.058 - 10939.471: 98.9362% ( 1) 00:07:35.780 12905.551 - 13006.375: 98.9694% ( 6) 00:07:35.780 13006.375 - 13107.200: 99.0414% ( 13) 00:07:35.780 13107.200 - 13208.025: 99.2409% ( 36) 00:07:35.780 13208.025 - 13308.849: 99.2797% ( 7) 00:07:35.780 13308.849 - 13409.674: 99.2908% ( 2) 00:07:35.780 16535.237 - 16636.062: 99.3074% ( 3) 00:07:35.780 16636.062 - 16736.886: 99.3628% ( 10) 00:07:35.780 16736.886 - 16837.711: 99.4127% ( 9) 00:07:35.780 16837.711 - 16938.535: 99.4570% ( 8) 00:07:35.780 16938.535 - 17039.360: 99.5069% ( 9) 00:07:35.780 17039.360 - 17140.185: 99.5346% ( 5) 00:07:35.780 17140.185 - 17241.009: 99.5623% ( 5) 00:07:35.780 17241.009 - 17341.834: 99.5900% ( 5) 00:07:35.780 17341.834 - 17442.658: 99.6121% ( 4) 00:07:35.780 17442.658 - 17543.483: 99.6398% ( 5) 00:07:35.780 17543.483 - 17644.308: 99.6454% ( 1) 00:07:35.780 21072.345 - 21173.169: 99.6676% ( 4) 00:07:35.780 21173.169 - 21273.994: 99.6953% ( 5) 00:07:35.780 21273.994 - 21374.818: 99.7008% ( 1) 00:07:35.780 21576.468 - 21677.292: 99.7285% ( 5) 00:07:35.780 21677.292 - 21778.117: 99.7617% ( 6) 00:07:35.780 21778.117 - 21878.942: 99.7895% ( 5) 00:07:35.780 21878.942 - 21979.766: 99.8172% ( 5) 00:07:35.780 21979.766 - 22080.591: 99.8504% ( 6) 00:07:35.780 22080.591 - 22181.415: 99.8781% ( 5) 00:07:35.780 22181.415 - 22282.240: 99.9113% ( 6) 00:07:35.780 22282.240 - 22383.065: 99.9391% ( 5) 00:07:35.780 22383.065 - 22483.889: 99.9723% ( 6) 00:07:35.780 22483.889 - 22584.714: 100.0000% ( 5) 00:07:35.780 00:07:35.780 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:35.780 ============================================================================== 00:07:35.780 Range in us Cumulative IO count 00:07:35.780 3780.923 - 3806.129: 0.0055% ( 1) 00:07:35.780 3881.748 - 3906.954: 0.0111% ( 1) 00:07:35.780 3906.954 - 3932.160: 0.0277% ( 3) 00:07:35.780 3932.160 - 3957.366: 0.0388% ( 2) 00:07:35.780 3957.366 - 3982.572: 
0.0997% ( 11) 00:07:35.780 3982.572 - 4007.778: 0.1662% ( 12) 00:07:35.780 4007.778 - 4032.985: 0.2327% ( 12) 00:07:35.780 4032.985 - 4058.191: 0.2715% ( 7) 00:07:35.780 4058.191 - 4083.397: 0.2881% ( 3) 00:07:35.780 4083.397 - 4108.603: 0.2992% ( 2) 00:07:35.780 4108.603 - 4133.809: 0.3103% ( 2) 00:07:35.780 4133.809 - 4159.015: 0.3214% ( 2) 00:07:35.780 4159.015 - 4184.222: 0.3269% ( 1) 00:07:35.780 4184.222 - 4209.428: 0.3380% ( 2) 00:07:35.780 4209.428 - 4234.634: 0.3435% ( 1) 00:07:35.780 4234.634 - 4259.840: 0.3546% ( 2) 00:07:35.780 5494.942 - 5520.148: 0.3712% ( 3) 00:07:35.780 5520.148 - 5545.354: 0.3823% ( 2) 00:07:35.780 5545.354 - 5570.560: 0.4211% ( 7) 00:07:35.780 5570.560 - 5595.766: 0.4654% ( 8) 00:07:35.780 5595.766 - 5620.972: 0.5264% ( 11) 00:07:35.780 5620.972 - 5646.178: 0.5818% ( 10) 00:07:35.780 5646.178 - 5671.385: 0.6316% ( 9) 00:07:35.780 5671.385 - 5696.591: 0.6427% ( 2) 00:07:35.780 5696.591 - 5721.797: 0.6594% ( 3) 00:07:35.780 5721.797 - 5747.003: 0.6704% ( 2) 00:07:35.780 5747.003 - 5772.209: 0.6871% ( 3) 00:07:35.780 5772.209 - 5797.415: 0.6981% ( 2) 00:07:35.780 5797.415 - 5822.622: 0.7203% ( 4) 00:07:35.780 5822.622 - 5847.828: 0.7757% ( 10) 00:07:35.781 5847.828 - 5873.034: 0.8200% ( 8) 00:07:35.781 5873.034 - 5898.240: 0.8588% ( 7) 00:07:35.781 5898.240 - 5923.446: 0.9031% ( 8) 00:07:35.781 5923.446 - 5948.652: 0.9696% ( 12) 00:07:35.781 5948.652 - 5973.858: 1.2467% ( 50) 00:07:35.781 5973.858 - 5999.065: 1.3409% ( 17) 00:07:35.781 5999.065 - 6024.271: 1.4184% ( 14) 00:07:35.781 6024.271 - 6049.477: 1.5071% ( 16) 00:07:35.781 6049.477 - 6074.683: 1.6456% ( 25) 00:07:35.781 6074.683 - 6099.889: 1.8728% ( 41) 00:07:35.781 6099.889 - 6125.095: 2.1443% ( 49) 00:07:35.781 6125.095 - 6150.302: 2.4102% ( 48) 00:07:35.781 6150.302 - 6175.508: 2.7815% ( 67) 00:07:35.781 6175.508 - 6200.714: 3.1749% ( 71) 00:07:35.781 6200.714 - 6225.920: 3.8398% ( 120) 00:07:35.781 6225.920 - 6251.126: 4.5545% ( 129) 00:07:35.781 6251.126 - 6276.332: 5.3912% ( 151) 00:07:35.781 6276.332 - 6301.538: 6.3276% ( 169) 00:07:35.781 6301.538 - 6326.745: 7.2972% ( 175) 00:07:35.781 6326.745 - 6351.951: 8.3112% ( 183) 00:07:35.781 6351.951 - 6377.157: 9.8848% ( 284) 00:07:35.781 6377.157 - 6402.363: 11.2810% ( 252) 00:07:35.781 6402.363 - 6427.569: 13.0264% ( 315) 00:07:35.781 6427.569 - 6452.775: 15.0543% ( 366) 00:07:35.781 6452.775 - 6503.188: 19.5479% ( 811) 00:07:35.781 6503.188 - 6553.600: 24.7285% ( 935) 00:07:35.781 6553.600 - 6604.012: 31.4439% ( 1212) 00:07:35.781 6604.012 - 6654.425: 38.1649% ( 1213) 00:07:35.781 6654.425 - 6704.837: 43.8442% ( 1025) 00:07:35.781 6704.837 - 6755.249: 48.4929% ( 839) 00:07:35.781 6755.249 - 6805.662: 52.6596% ( 752) 00:07:35.781 6805.662 - 6856.074: 56.4716% ( 688) 00:07:35.781 6856.074 - 6906.486: 61.0926% ( 834) 00:07:35.781 6906.486 - 6956.898: 64.2343% ( 567) 00:07:35.781 6956.898 - 7007.311: 67.2872% ( 551) 00:07:35.781 7007.311 - 7057.723: 69.4371% ( 388) 00:07:35.781 7057.723 - 7108.135: 71.6035% ( 391) 00:07:35.781 7108.135 - 7158.548: 73.5262% ( 347) 00:07:35.781 7158.548 - 7208.960: 75.4488% ( 347) 00:07:35.781 7208.960 - 7259.372: 77.1166% ( 301) 00:07:35.781 7259.372 - 7309.785: 78.4464% ( 240) 00:07:35.781 7309.785 - 7360.197: 79.9147% ( 265) 00:07:35.781 7360.197 - 7410.609: 80.8289% ( 165) 00:07:35.781 7410.609 - 7461.022: 81.7875% ( 173) 00:07:35.781 7461.022 - 7511.434: 82.8790% ( 197) 00:07:35.781 7511.434 - 7561.846: 83.8431% ( 174) 00:07:35.781 7561.846 - 7612.258: 84.6243% ( 141) 00:07:35.781 7612.258 - 7662.671: 85.3779% ( 
136) 00:07:35.781 7662.671 - 7713.083: 85.9818% ( 109) 00:07:35.781 7713.083 - 7763.495: 86.5969% ( 111) 00:07:35.781 7763.495 - 7813.908: 87.1454% ( 99) 00:07:35.781 7813.908 - 7864.320: 87.5554% ( 74) 00:07:35.781 7864.320 - 7914.732: 87.9820% ( 77) 00:07:35.781 7914.732 - 7965.145: 88.6359% ( 118) 00:07:35.781 7965.145 - 8015.557: 89.1456% ( 92) 00:07:35.781 8015.557 - 8065.969: 89.5335% ( 70) 00:07:35.781 8065.969 - 8116.382: 89.9878% ( 82) 00:07:35.781 8116.382 - 8166.794: 90.4532% ( 84) 00:07:35.781 8166.794 - 8217.206: 91.0516% ( 108) 00:07:35.781 8217.206 - 8267.618: 91.4783% ( 77) 00:07:35.781 8267.618 - 8318.031: 91.9714% ( 89) 00:07:35.781 8318.031 - 8368.443: 92.3703% ( 72) 00:07:35.781 8368.443 - 8418.855: 92.7970% ( 77) 00:07:35.781 8418.855 - 8469.268: 93.2181% ( 76) 00:07:35.781 8469.268 - 8519.680: 93.7001% ( 87) 00:07:35.781 8519.680 - 8570.092: 94.2154% ( 93) 00:07:35.781 8570.092 - 8620.505: 94.7086% ( 89) 00:07:35.781 8620.505 - 8670.917: 95.3402% ( 114) 00:07:35.781 8670.917 - 8721.329: 95.8001% ( 83) 00:07:35.781 8721.329 - 8771.742: 96.0494% ( 45) 00:07:35.781 8771.742 - 8822.154: 96.2877% ( 43) 00:07:35.781 8822.154 - 8872.566: 96.6146% ( 59) 00:07:35.781 8872.566 - 8922.978: 96.7199% ( 19) 00:07:35.781 8922.978 - 8973.391: 96.8362% ( 21) 00:07:35.781 8973.391 - 9023.803: 96.9359% ( 18) 00:07:35.781 9023.803 - 9074.215: 97.0246% ( 16) 00:07:35.781 9074.215 - 9124.628: 97.0911% ( 12) 00:07:35.781 9124.628 - 9175.040: 97.3183% ( 41) 00:07:35.781 9175.040 - 9225.452: 97.3848% ( 12) 00:07:35.781 9225.452 - 9275.865: 97.4457% ( 11) 00:07:35.781 9275.865 - 9326.277: 97.5122% ( 12) 00:07:35.781 9326.277 - 9376.689: 97.6064% ( 17) 00:07:35.781 9376.689 - 9427.102: 97.6895% ( 15) 00:07:35.781 9427.102 - 9477.514: 97.8446% ( 28) 00:07:35.781 9477.514 - 9527.926: 97.9333% ( 16) 00:07:35.781 9527.926 - 9578.338: 98.0552% ( 22) 00:07:35.781 9578.338 - 9628.751: 98.1605% ( 19) 00:07:35.781 9628.751 - 9679.163: 98.1937% ( 6) 00:07:35.781 9679.163 - 9729.575: 98.2270% ( 6) 00:07:35.781 9729.575 - 9779.988: 98.2713% ( 8) 00:07:35.781 9779.988 - 9830.400: 98.2934% ( 4) 00:07:35.781 9830.400 - 9880.812: 98.3378% ( 8) 00:07:35.781 9880.812 - 9931.225: 98.3876% ( 9) 00:07:35.781 9931.225 - 9981.637: 98.4486% ( 11) 00:07:35.781 9981.637 - 10032.049: 98.5206% ( 13) 00:07:35.781 10032.049 - 10082.462: 98.5926% ( 13) 00:07:35.781 10082.462 - 10132.874: 98.6647% ( 13) 00:07:35.781 10132.874 - 10183.286: 98.7201% ( 10) 00:07:35.781 10183.286 - 10233.698: 98.7533% ( 6) 00:07:35.781 10233.698 - 10284.111: 98.7755% ( 4) 00:07:35.781 10284.111 - 10334.523: 98.7921% ( 3) 00:07:35.781 10334.523 - 10384.935: 98.8143% ( 4) 00:07:35.781 10384.935 - 10435.348: 98.8309% ( 3) 00:07:35.781 10435.348 - 10485.760: 98.8531% ( 4) 00:07:35.781 10485.760 - 10536.172: 98.8641% ( 2) 00:07:35.781 10536.172 - 10586.585: 98.8752% ( 2) 00:07:35.781 10586.585 - 10636.997: 98.8863% ( 2) 00:07:35.781 10636.997 - 10687.409: 98.8918% ( 1) 00:07:35.781 10687.409 - 10737.822: 98.8974% ( 1) 00:07:35.781 10737.822 - 10788.234: 98.9085% ( 2) 00:07:35.781 10788.234 - 10838.646: 98.9140% ( 1) 00:07:35.781 10838.646 - 10889.058: 98.9251% ( 2) 00:07:35.781 10889.058 - 10939.471: 98.9306% ( 1) 00:07:35.781 10939.471 - 10989.883: 98.9362% ( 1) 00:07:35.781 13107.200 - 13208.025: 98.9417% ( 1) 00:07:35.781 13208.025 - 13308.849: 98.9860% ( 8) 00:07:35.781 13308.849 - 13409.674: 99.0304% ( 8) 00:07:35.781 13409.674 - 13510.498: 99.1024% ( 13) 00:07:35.781 13510.498 - 13611.323: 99.2686% ( 30) 00:07:35.781 13611.323 - 13712.148: 
99.2908% ( 4) 00:07:35.781 16232.763 - 16333.588: 99.2963% ( 1) 00:07:35.781 16736.886 - 16837.711: 99.3240% ( 5) 00:07:35.781 16837.711 - 16938.535: 99.3684% ( 8) 00:07:35.781 16938.535 - 17039.360: 99.4238% ( 10) 00:07:35.781 17039.360 - 17140.185: 99.4792% ( 10) 00:07:35.781 17140.185 - 17241.009: 99.5069% ( 5) 00:07:35.781 17241.009 - 17341.834: 99.5346% ( 5) 00:07:35.781 17341.834 - 17442.658: 99.5623% ( 5) 00:07:35.781 17442.658 - 17543.483: 99.5955% ( 6) 00:07:35.781 17543.483 - 17644.308: 99.6232% ( 5) 00:07:35.781 17644.308 - 17745.132: 99.6454% ( 4) 00:07:35.781 20971.520 - 21072.345: 99.6676% ( 4) 00:07:35.781 21072.345 - 21173.169: 99.7119% ( 8) 00:07:35.781 21173.169 - 21273.994: 99.8061% ( 17) 00:07:35.781 21273.994 - 21374.818: 99.8559% ( 9) 00:07:35.781 21374.818 - 21475.643: 99.8892% ( 6) 00:07:35.781 21677.292 - 21778.117: 99.9113% ( 4) 00:07:35.781 21778.117 - 21878.942: 99.9446% ( 6) 00:07:35.781 21878.942 - 21979.766: 99.9723% ( 5) 00:07:35.781 21979.766 - 22080.591: 100.0000% ( 5) 00:07:35.781 00:07:35.781 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:35.781 ============================================================================== 00:07:35.781 Range in us Cumulative IO count 00:07:35.781 3478.449 - 3503.655: 0.0055% ( 1) 00:07:35.781 3503.655 - 3528.862: 0.0166% ( 2) 00:07:35.781 3528.862 - 3554.068: 0.0277% ( 2) 00:07:35.781 3554.068 - 3579.274: 0.0554% ( 5) 00:07:35.781 3579.274 - 3604.480: 0.0942% ( 7) 00:07:35.781 3604.480 - 3629.686: 0.0997% ( 1) 00:07:35.781 3629.686 - 3654.892: 0.1551% ( 10) 00:07:35.781 3654.892 - 3680.098: 0.2050% ( 9) 00:07:35.781 3680.098 - 3705.305: 0.2604% ( 10) 00:07:35.781 3705.305 - 3730.511: 0.2715% ( 2) 00:07:35.781 3730.511 - 3755.717: 0.2881% ( 3) 00:07:35.781 3755.717 - 3780.923: 0.2992% ( 2) 00:07:35.781 3780.923 - 3806.129: 0.3103% ( 2) 00:07:35.781 3806.129 - 3831.335: 0.3214% ( 2) 00:07:35.781 3831.335 - 3856.542: 0.3380% ( 3) 00:07:35.781 3856.542 - 3881.748: 0.3491% ( 2) 00:07:35.781 3881.748 - 3906.954: 0.3546% ( 1) 00:07:35.781 5217.674 - 5242.880: 0.3602% ( 1) 00:07:35.781 5242.880 - 5268.086: 0.3712% ( 2) 00:07:35.781 5268.086 - 5293.292: 0.3823% ( 2) 00:07:35.781 5293.292 - 5318.498: 0.4156% ( 6) 00:07:35.781 5318.498 - 5343.705: 0.4599% ( 8) 00:07:35.781 5343.705 - 5368.911: 0.5208% ( 11) 00:07:35.781 5368.911 - 5394.117: 0.5762% ( 10) 00:07:35.781 5394.117 - 5419.323: 0.6150% ( 7) 00:07:35.781 5419.323 - 5444.529: 0.6316% ( 3) 00:07:35.781 5444.529 - 5469.735: 0.6427% ( 2) 00:07:35.781 5469.735 - 5494.942: 0.6538% ( 2) 00:07:35.781 5494.942 - 5520.148: 0.6649% ( 2) 00:07:35.781 5520.148 - 5545.354: 0.6760% ( 2) 00:07:35.781 5545.354 - 5570.560: 0.6871% ( 2) 00:07:35.781 5570.560 - 5595.766: 0.6981% ( 2) 00:07:35.781 5595.766 - 5620.972: 0.7092% ( 2) 00:07:35.781 5671.385 - 5696.591: 0.7148% ( 1) 00:07:35.781 5721.797 - 5747.003: 0.7203% ( 1) 00:07:35.781 5747.003 - 5772.209: 0.7258% ( 1) 00:07:35.781 5797.415 - 5822.622: 0.7314% ( 1) 00:07:35.781 5847.828 - 5873.034: 0.7535% ( 4) 00:07:35.781 5873.034 - 5898.240: 0.7757% ( 4) 00:07:35.781 5898.240 - 5923.446: 0.8034% ( 5) 00:07:35.781 5923.446 - 5948.652: 0.8699% ( 12) 00:07:35.781 5948.652 - 5973.858: 0.9530% ( 15) 00:07:35.782 5973.858 - 5999.065: 1.0638% ( 20) 00:07:35.782 5999.065 - 6024.271: 1.2301% ( 30) 00:07:35.782 6024.271 - 6049.477: 1.3797% ( 27) 00:07:35.782 6049.477 - 6074.683: 1.7232% ( 62) 00:07:35.782 6074.683 - 6099.889: 2.0335% ( 56) 00:07:35.782 6099.889 - 6125.095: 2.4934% ( 83) 00:07:35.782 6125.095 - 6150.302: 2.8590% ( 
66) 00:07:35.782 6150.302 - 6175.508: 3.2746% ( 75) 00:07:35.782 6175.508 - 6200.714: 3.8121% ( 97) 00:07:35.782 6200.714 - 6225.920: 4.3163% ( 91) 00:07:35.782 6225.920 - 6251.126: 4.9645% ( 117) 00:07:35.782 6251.126 - 6276.332: 5.7901% ( 149) 00:07:35.782 6276.332 - 6301.538: 6.6489% ( 155) 00:07:35.782 6301.538 - 6326.745: 7.5742% ( 167) 00:07:35.782 6326.745 - 6351.951: 8.4996% ( 167) 00:07:35.782 6351.951 - 6377.157: 9.7795% ( 231) 00:07:35.782 6377.157 - 6402.363: 11.2312% ( 262) 00:07:35.782 6402.363 - 6427.569: 12.9876% ( 317) 00:07:35.782 6427.569 - 6452.775: 14.8604% ( 338) 00:07:35.782 6452.775 - 6503.188: 19.1434% ( 773) 00:07:35.782 6503.188 - 6553.600: 24.1412% ( 902) 00:07:35.782 6553.600 - 6604.012: 30.9896% ( 1236) 00:07:35.782 6604.012 - 6654.425: 37.2285% ( 1126) 00:07:35.782 6654.425 - 6704.837: 42.7416% ( 995) 00:07:35.782 6704.837 - 6755.249: 47.6230% ( 881) 00:07:35.782 6755.249 - 6805.662: 52.5931% ( 897) 00:07:35.782 6805.662 - 6856.074: 57.0922% ( 812) 00:07:35.782 6856.074 - 6906.486: 61.0871% ( 721) 00:07:35.782 6906.486 - 6956.898: 64.3174% ( 583) 00:07:35.782 6956.898 - 7007.311: 67.1432% ( 510) 00:07:35.782 7007.311 - 7057.723: 69.6144% ( 446) 00:07:35.782 7057.723 - 7108.135: 71.8805% ( 409) 00:07:35.782 7108.135 - 7158.548: 73.6924% ( 327) 00:07:35.782 7158.548 - 7208.960: 75.3435% ( 298) 00:07:35.782 7208.960 - 7259.372: 77.1775% ( 331) 00:07:35.782 7259.372 - 7309.785: 79.0448% ( 337) 00:07:35.782 7309.785 - 7360.197: 80.2194% ( 212) 00:07:35.782 7360.197 - 7410.609: 81.2722% ( 190) 00:07:35.782 7410.609 - 7461.022: 82.4468% ( 212) 00:07:35.782 7461.022 - 7511.434: 83.3555% ( 164) 00:07:35.782 7511.434 - 7561.846: 84.2199% ( 156) 00:07:35.782 7561.846 - 7612.258: 84.7573% ( 97) 00:07:35.782 7612.258 - 7662.671: 85.4499% ( 125) 00:07:35.782 7662.671 - 7713.083: 85.8433% ( 71) 00:07:35.782 7713.083 - 7763.495: 86.2866% ( 80) 00:07:35.782 7763.495 - 7813.908: 86.7631% ( 86) 00:07:35.782 7813.908 - 7864.320: 87.4945% ( 132) 00:07:35.782 7864.320 - 7914.732: 88.2258% ( 132) 00:07:35.782 7914.732 - 7965.145: 88.8630% ( 115) 00:07:35.782 7965.145 - 8015.557: 89.2453% ( 69) 00:07:35.782 8015.557 - 8065.969: 89.5612% ( 57) 00:07:35.782 8065.969 - 8116.382: 89.8992% ( 61) 00:07:35.782 8116.382 - 8166.794: 90.3590% ( 83) 00:07:35.782 8166.794 - 8217.206: 91.2123% ( 154) 00:07:35.782 8217.206 - 8267.618: 91.8551% ( 116) 00:07:35.782 8267.618 - 8318.031: 92.5199% ( 120) 00:07:35.782 8318.031 - 8368.443: 92.9244% ( 73) 00:07:35.782 8368.443 - 8418.855: 93.2458% ( 58) 00:07:35.782 8418.855 - 8469.268: 93.6447% ( 72) 00:07:35.782 8469.268 - 8519.680: 94.0547% ( 74) 00:07:35.782 8519.680 - 8570.092: 94.4094% ( 64) 00:07:35.782 8570.092 - 8620.505: 94.6587% ( 45) 00:07:35.782 8620.505 - 8670.917: 95.1020% ( 80) 00:07:35.782 8670.917 - 8721.329: 95.3624% ( 47) 00:07:35.782 8721.329 - 8771.742: 95.5618% ( 36) 00:07:35.782 8771.742 - 8822.154: 95.8112% ( 45) 00:07:35.782 8822.154 - 8872.566: 96.2101% ( 72) 00:07:35.782 8872.566 - 8922.978: 96.5093% ( 54) 00:07:35.782 8922.978 - 8973.391: 96.7586% ( 45) 00:07:35.782 8973.391 - 9023.803: 96.9193% ( 29) 00:07:35.782 9023.803 - 9074.215: 97.0578% ( 25) 00:07:35.782 9074.215 - 9124.628: 97.2296% ( 31) 00:07:35.782 9124.628 - 9175.040: 97.4845% ( 46) 00:07:35.782 9175.040 - 9225.452: 97.6562% ( 31) 00:07:35.782 9225.452 - 9275.865: 97.8557% ( 36) 00:07:35.782 9275.865 - 9326.277: 97.9942% ( 25) 00:07:35.782 9326.277 - 9376.689: 98.0773% ( 15) 00:07:35.782 9376.689 - 9427.102: 98.2214% ( 26) 00:07:35.782 9427.102 - 9477.514: 
98.3267% ( 19) 00:07:35.782 9477.514 - 9527.926: 98.3544% ( 5) 00:07:35.782 9527.926 - 9578.338: 98.3876% ( 6) 00:07:35.782 9578.338 - 9628.751: 98.4209% ( 6) 00:07:35.782 9628.751 - 9679.163: 98.4597% ( 7) 00:07:35.782 9679.163 - 9729.575: 98.4929% ( 6) 00:07:35.782 9729.575 - 9779.988: 98.5206% ( 5) 00:07:35.782 9779.988 - 9830.400: 98.5317% ( 2) 00:07:35.782 9830.400 - 9880.812: 98.5428% ( 2) 00:07:35.782 9880.812 - 9931.225: 98.5483% ( 1) 00:07:35.782 9931.225 - 9981.637: 98.5594% ( 2) 00:07:35.782 9981.637 - 10032.049: 98.5649% ( 1) 00:07:35.782 10032.049 - 10082.462: 98.5816% ( 3) 00:07:35.782 10082.462 - 10132.874: 98.5871% ( 1) 00:07:35.782 10183.286 - 10233.698: 98.5982% ( 2) 00:07:35.782 10233.698 - 10284.111: 98.6093% ( 2) 00:07:35.782 10284.111 - 10334.523: 98.6148% ( 1) 00:07:35.782 10334.523 - 10384.935: 98.6259% ( 2) 00:07:35.782 10384.935 - 10435.348: 98.6314% ( 1) 00:07:35.782 10435.348 - 10485.760: 98.6480% ( 3) 00:07:35.782 10485.760 - 10536.172: 98.6536% ( 1) 00:07:35.782 10536.172 - 10586.585: 98.6647% ( 2) 00:07:35.782 10586.585 - 10636.997: 98.6758% ( 2) 00:07:35.782 10636.997 - 10687.409: 98.6868% ( 2) 00:07:35.782 10687.409 - 10737.822: 98.7035% ( 3) 00:07:35.782 10788.234 - 10838.646: 98.7145% ( 2) 00:07:35.782 10838.646 - 10889.058: 98.7312% ( 3) 00:07:35.782 10889.058 - 10939.471: 98.7422% ( 2) 00:07:35.782 10939.471 - 10989.883: 98.7533% ( 2) 00:07:35.782 10989.883 - 11040.295: 98.7866% ( 6) 00:07:35.782 11040.295 - 11090.708: 98.8198% ( 6) 00:07:35.782 11090.708 - 11141.120: 98.8420% ( 4) 00:07:35.782 11141.120 - 11191.532: 98.8697% ( 5) 00:07:35.782 11191.532 - 11241.945: 98.8918% ( 4) 00:07:35.782 11241.945 - 11292.357: 98.9140% ( 4) 00:07:35.782 11292.357 - 11342.769: 98.9251% ( 2) 00:07:35.782 11342.769 - 11393.182: 98.9362% ( 2) 00:07:35.782 13308.849 - 13409.674: 99.0027% ( 12) 00:07:35.782 13409.674 - 13510.498: 99.1246% ( 22) 00:07:35.782 13510.498 - 13611.323: 99.2575% ( 24) 00:07:35.782 13611.323 - 13712.148: 99.2908% ( 6) 00:07:35.782 16535.237 - 16636.062: 99.3240% ( 6) 00:07:35.782 16636.062 - 16736.886: 99.3684% ( 8) 00:07:35.782 16736.886 - 16837.711: 99.4071% ( 7) 00:07:35.782 16837.711 - 16938.535: 99.4459% ( 7) 00:07:35.782 16938.535 - 17039.360: 99.4847% ( 7) 00:07:35.782 17039.360 - 17140.185: 99.5180% ( 6) 00:07:35.782 17140.185 - 17241.009: 99.5512% ( 6) 00:07:35.782 17241.009 - 17341.834: 99.5844% ( 6) 00:07:35.782 17341.834 - 17442.658: 99.6177% ( 6) 00:07:35.782 17442.658 - 17543.483: 99.6454% ( 5) 00:07:35.782 20467.397 - 20568.222: 99.6731% ( 5) 00:07:35.782 20568.222 - 20669.046: 99.7174% ( 8) 00:07:35.782 20669.046 - 20769.871: 99.8615% ( 26) 00:07:35.782 20769.871 - 20870.695: 99.9113% ( 9) 00:07:35.782 20870.695 - 20971.520: 99.9335% ( 4) 00:07:35.782 21173.169 - 21273.994: 99.9612% ( 5) 00:07:35.782 21273.994 - 21374.818: 99.9889% ( 5) 00:07:35.782 21374.818 - 21475.643: 100.0000% ( 2) 00:07:35.782 00:07:35.782 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:35.782 ============================================================================== 00:07:35.782 Range in us Cumulative IO count 00:07:35.782 3251.594 - 3276.800: 0.0055% ( 1) 00:07:35.782 3276.800 - 3302.006: 0.0111% ( 1) 00:07:35.782 3302.006 - 3327.212: 0.0388% ( 5) 00:07:35.782 3327.212 - 3352.418: 0.0665% ( 5) 00:07:35.782 3352.418 - 3377.625: 0.1053% ( 7) 00:07:35.782 3377.625 - 3402.831: 0.1496% ( 8) 00:07:35.782 3402.831 - 3428.037: 0.1939% ( 8) 00:07:35.782 3428.037 - 3453.243: 0.2327% ( 7) 00:07:35.782 3453.243 - 3478.449: 0.2770% ( 8) 
00:07:35.782 3478.449 - 3503.655: 0.2937% ( 3) 00:07:35.783 3503.655 - 3528.862: 0.3047% ( 2) 00:07:35.783 3528.862 - 3554.068: 0.3103% ( 1) 00:07:35.783 3554.068 - 3579.274: 0.3214% ( 2) 00:07:35.783 3579.274 - 3604.480: 0.3324% ( 2) 00:07:35.783 3604.480 - 3629.686: 0.3491% ( 3) 00:07:35.783 3629.686 - 3654.892: 0.3546% ( 1) 00:07:35.783 4789.169 - 4814.375: 0.3602% ( 1) 00:07:35.783 4940.406 - 4965.612: 0.3712% ( 2) 00:07:35.783 4965.612 - 4990.818: 0.3823% ( 2) 00:07:35.783 4990.818 - 5016.025: 0.3879% ( 1) 00:07:35.783 5016.025 - 5041.231: 0.4100% ( 4) 00:07:35.783 5041.231 - 5066.437: 0.4211% ( 2) 00:07:35.783 5066.437 - 5091.643: 0.4599% ( 7) 00:07:35.783 5091.643 - 5116.849: 0.5208% ( 11) 00:07:35.783 5116.849 - 5142.055: 0.5984% ( 14) 00:07:35.783 5142.055 - 5167.262: 0.6261% ( 5) 00:07:35.783 5167.262 - 5192.468: 0.6427% ( 3) 00:07:35.783 5192.468 - 5217.674: 0.6538% ( 2) 00:07:35.783 5217.674 - 5242.880: 0.6649% ( 2) 00:07:35.783 5242.880 - 5268.086: 0.6760% ( 2) 00:07:35.783 5268.086 - 5293.292: 0.6871% ( 2) 00:07:35.783 5293.292 - 5318.498: 0.6981% ( 2) 00:07:35.783 5318.498 - 5343.705: 0.7092% ( 2) 00:07:35.783 5595.766 - 5620.972: 0.7148% ( 1) 00:07:35.783 5747.003 - 5772.209: 0.7369% ( 4) 00:07:35.783 5772.209 - 5797.415: 0.7591% ( 4) 00:07:35.783 5797.415 - 5822.622: 0.7812% ( 4) 00:07:35.783 5822.622 - 5847.828: 0.8034% ( 4) 00:07:35.783 5847.828 - 5873.034: 0.8256% ( 4) 00:07:35.783 5873.034 - 5898.240: 0.8477% ( 4) 00:07:35.783 5898.240 - 5923.446: 0.8699% ( 4) 00:07:35.783 5923.446 - 5948.652: 1.0084% ( 25) 00:07:35.783 5948.652 - 5973.858: 1.0472% ( 7) 00:07:35.783 5973.858 - 5999.065: 1.1137% ( 12) 00:07:35.783 5999.065 - 6024.271: 1.1802% ( 12) 00:07:35.783 6024.271 - 6049.477: 1.3021% ( 22) 00:07:35.783 6049.477 - 6074.683: 1.4849% ( 33) 00:07:35.783 6074.683 - 6099.889: 1.7564% ( 49) 00:07:35.783 6099.889 - 6125.095: 2.0612% ( 55) 00:07:35.783 6125.095 - 6150.302: 2.6263% ( 102) 00:07:35.783 6150.302 - 6175.508: 2.9643% ( 61) 00:07:35.783 6175.508 - 6200.714: 3.4852% ( 94) 00:07:35.783 6200.714 - 6225.920: 4.1944% ( 128) 00:07:35.783 6225.920 - 6251.126: 4.8205% ( 113) 00:07:35.783 6251.126 - 6276.332: 5.7070% ( 160) 00:07:35.783 6276.332 - 6301.538: 6.6711% ( 174) 00:07:35.783 6301.538 - 6326.745: 7.4801% ( 146) 00:07:35.783 6326.745 - 6351.951: 8.3389% ( 155) 00:07:35.783 6351.951 - 6377.157: 9.5689% ( 222) 00:07:35.783 6377.157 - 6402.363: 11.0040% ( 259) 00:07:35.783 6402.363 - 6427.569: 12.6662% ( 300) 00:07:35.783 6427.569 - 6452.775: 14.3174% ( 298) 00:07:35.783 6452.775 - 6503.188: 18.6613% ( 784) 00:07:35.783 6503.188 - 6553.600: 24.1689% ( 994) 00:07:35.783 6553.600 - 6604.012: 30.9453% ( 1223) 00:07:35.783 6604.012 - 6654.425: 37.2230% ( 1133) 00:07:35.783 6654.425 - 6704.837: 42.8191% ( 1010) 00:07:35.783 6704.837 - 6755.249: 47.8834% ( 914) 00:07:35.783 6755.249 - 6805.662: 52.4546% ( 825) 00:07:35.783 6805.662 - 6856.074: 56.7099% ( 768) 00:07:35.783 6856.074 - 6906.486: 60.6549% ( 712) 00:07:35.783 6906.486 - 6956.898: 64.2287% ( 645) 00:07:35.783 6956.898 - 7007.311: 67.0988% ( 518) 00:07:35.783 7007.311 - 7057.723: 69.3595% ( 408) 00:07:35.783 7057.723 - 7108.135: 71.6811% ( 419) 00:07:35.783 7108.135 - 7158.548: 73.5539% ( 338) 00:07:35.783 7158.548 - 7208.960: 75.1053% ( 280) 00:07:35.783 7208.960 - 7259.372: 77.0168% ( 345) 00:07:35.783 7259.372 - 7309.785: 78.5960% ( 285) 00:07:35.783 7309.785 - 7360.197: 79.5379% ( 170) 00:07:35.783 7360.197 - 7410.609: 80.8898% ( 244) 00:07:35.783 7410.609 - 7461.022: 82.0368% ( 207) 00:07:35.783 7461.022 - 
7511.434: 83.0009% ( 174) 00:07:35.783 7511.434 - 7561.846: 84.0592% ( 191) 00:07:35.783 7561.846 - 7612.258: 84.9900% ( 168) 00:07:35.783 7612.258 - 7662.671: 85.7824% ( 143) 00:07:35.783 7662.671 - 7713.083: 86.4195% ( 115) 00:07:35.783 7713.083 - 7763.495: 87.1398% ( 130) 00:07:35.783 7763.495 - 7813.908: 87.5277% ( 70) 00:07:35.783 7813.908 - 7864.320: 87.9156% ( 70) 00:07:35.783 7864.320 - 7914.732: 88.3477% ( 78) 00:07:35.783 7914.732 - 7965.145: 88.9683% ( 112) 00:07:35.783 7965.145 - 8015.557: 89.4781% ( 92) 00:07:35.783 8015.557 - 8065.969: 90.2371% ( 137) 00:07:35.783 8065.969 - 8116.382: 90.8411% ( 109) 00:07:35.783 8116.382 - 8166.794: 91.3896% ( 99) 00:07:35.783 8166.794 - 8217.206: 91.9105% ( 94) 00:07:35.783 8217.206 - 8267.618: 92.4202% ( 92) 00:07:35.783 8267.618 - 8318.031: 92.6585% ( 43) 00:07:35.783 8318.031 - 8368.443: 92.9466% ( 52) 00:07:35.783 8368.443 - 8418.855: 93.4120% ( 84) 00:07:35.783 8418.855 - 8469.268: 93.7445% ( 60) 00:07:35.783 8469.268 - 8519.680: 94.0160% ( 49) 00:07:35.783 8519.680 - 8570.092: 94.3262% ( 56) 00:07:35.783 8570.092 - 8620.505: 94.6365% ( 56) 00:07:35.783 8620.505 - 8670.917: 94.8914% ( 46) 00:07:35.783 8670.917 - 8721.329: 95.1574% ( 48) 00:07:35.783 8721.329 - 8771.742: 95.3291% ( 31) 00:07:35.783 8771.742 - 8822.154: 95.8444% ( 93) 00:07:35.783 8822.154 - 8872.566: 96.2434% ( 72) 00:07:35.783 8872.566 - 8922.978: 96.5703% ( 59) 00:07:35.783 8922.978 - 8973.391: 96.8528% ( 51) 00:07:35.783 8973.391 - 9023.803: 97.1465% ( 53) 00:07:35.783 9023.803 - 9074.215: 97.4014% ( 46) 00:07:35.783 9074.215 - 9124.628: 97.7061% ( 55) 00:07:35.783 9124.628 - 9175.040: 97.9610% ( 46) 00:07:35.783 9175.040 - 9225.452: 98.2103% ( 45) 00:07:35.783 9225.452 - 9275.865: 98.3544% ( 26) 00:07:35.783 9275.865 - 9326.277: 98.4098% ( 10) 00:07:35.783 9326.277 - 9376.689: 98.4652% ( 10) 00:07:35.783 9376.689 - 9427.102: 98.5040% ( 7) 00:07:35.783 9427.102 - 9477.514: 98.5262% ( 4) 00:07:35.783 9477.514 - 9527.926: 98.5539% ( 5) 00:07:35.783 9527.926 - 9578.338: 98.5649% ( 2) 00:07:35.783 9578.338 - 9628.751: 98.5816% ( 3) 00:07:35.783 11090.708 - 11141.120: 98.5871% ( 1) 00:07:35.783 11241.945 - 11292.357: 98.5926% ( 1) 00:07:35.783 11342.769 - 11393.182: 98.6037% ( 2) 00:07:35.783 11393.182 - 11443.594: 98.6148% ( 2) 00:07:35.783 11443.594 - 11494.006: 98.6314% ( 3) 00:07:35.783 11494.006 - 11544.418: 98.6425% ( 2) 00:07:35.783 11544.418 - 11594.831: 98.6536% ( 2) 00:07:35.783 11594.831 - 11645.243: 98.6702% ( 3) 00:07:35.783 11645.243 - 11695.655: 98.6813% ( 2) 00:07:35.783 11695.655 - 11746.068: 98.6924% ( 2) 00:07:35.783 11746.068 - 11796.480: 98.7035% ( 2) 00:07:35.783 11796.480 - 11846.892: 98.7145% ( 2) 00:07:35.783 11846.892 - 11897.305: 98.7312% ( 3) 00:07:35.783 11897.305 - 11947.717: 98.7589% ( 5) 00:07:35.783 11947.717 - 11998.129: 98.7921% ( 6) 00:07:35.783 11998.129 - 12048.542: 98.8198% ( 5) 00:07:35.783 12048.542 - 12098.954: 98.8420% ( 4) 00:07:35.783 12098.954 - 12149.366: 98.8586% ( 3) 00:07:35.783 12149.366 - 12199.778: 98.8752% ( 3) 00:07:35.783 12199.778 - 12250.191: 98.8974% ( 4) 00:07:35.783 12250.191 - 12300.603: 98.9195% ( 4) 00:07:35.783 12300.603 - 12351.015: 98.9306% ( 2) 00:07:35.783 12351.015 - 12401.428: 98.9362% ( 1) 00:07:35.783 12451.840 - 12502.252: 98.9417% ( 1) 00:07:35.783 12502.252 - 12552.665: 98.9639% ( 4) 00:07:35.783 12552.665 - 12603.077: 98.9805% ( 3) 00:07:35.783 12603.077 - 12653.489: 99.0027% ( 4) 00:07:35.783 12653.489 - 12703.902: 99.0193% ( 3) 00:07:35.783 12703.902 - 12754.314: 99.0414% ( 4) 00:07:35.783 
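In the raw log, every bucket line in these histograms has the same shape: a microsecond range, a cumulative percentage of IOs, and a per-bucket count, e.g. "6503.188 - 6553.600: 29.7651% ( 782)". A minimal sketch of how a tail percentile could be pulled out of a saved copy of this log; the file name and the regex are illustrative, not part of the SPDK harness:

    import re

    # Matches bucket lines such as "6503.188 - 6553.600:  29.7651% (  782)".
    BUCKET_RE = re.compile(r"(\d+\.\d+)\s*-\s*(\d+\.\d+):\s*(\d+\.\d+)%\s*\(\s*(\d+)\)")

    def percentile_us(lines, target=99.0):
        """Return the upper bound (in us) of the first bucket whose
        cumulative IO percentage reaches `target`, else None."""
        for line in lines:
            m = BUCKET_RE.search(line)
            if m and float(m.group(3)) >= target:
                return float(m.group(2))
        return None

    with open("nvme-vg-autotest.log") as f:  # hypothetical saved copy of this log
        print(percentile_us(f, target=99.0))

Because the percentages are already cumulative, no sorting or counting is needed; a real checker would first split the log at each "Latency histogram for ..." caption so each device and namespace is evaluated separately.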
00:07:35.783
00:07:35.783 04:57:04 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:35.783
00:07:35.783 real    0m2.417s
00:07:35.783 user    0m2.172s
00:07:35.783 sys     0m0.148s
00:07:35.783 04:57:04 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:35.783 04:57:04 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:35.783 ************************************
00:07:35.783 END TEST nvme_perf
00:07:35.783 ************************************
00:07:35.783 04:57:05 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:35.784 04:57:05 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:35.784 04:57:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:35.784 04:57:05 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:35.784 ************************************
00:07:35.784 START TEST nvme_hello_world
00:07:35.784 ************************************
00:07:35.784 04:57:05 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:36.043 Initializing NVMe Controllers
00:07:36.043 Attached to 0000:00:13.0
00:07:36.043   Namespace ID: 1 size: 1GB
00:07:36.043 Attached to 0000:00:10.0
00:07:36.043   Namespace ID: 1 size: 6GB
00:07:36.043 Attached to 0000:00:11.0
00:07:36.043   Namespace ID: 1 size: 5GB
00:07:36.043 Attached to 0000:00:12.0
00:07:36.043   Namespace ID: 1 size: 4GB
00:07:36.043   Namespace ID: 2 size: 4GB
00:07:36.043   Namespace ID: 3 size: 4GB
00:07:36.043 Initialization complete.
00:07:36.043 INFO: using host memory buffer for IO
00:07:36.043 Hello world!
00:07:36.043 INFO: using host memory buffer for IO
00:07:36.043 Hello world!
00:07:36.043 INFO: using host memory buffer for IO
00:07:36.043 Hello world!
00:07:36.043 INFO: using host memory buffer for IO
00:07:36.043 Hello world!
00:07:36.043 INFO: using host memory buffer for IO
00:07:36.043 Hello world!
00:07:36.043 INFO: using host memory buffer for IO
00:07:36.043 Hello world!
00:07:36.043
00:07:36.043 real    0m0.184s
00:07:36.043 user    0m0.066s
00:07:36.043 sys     0m0.071s
00:07:36.043 04:57:05 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:36.043 04:57:05 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:36.043 ************************************
00:07:36.043 END TEST nvme_hello_world
00:07:36.043 ************************************
00:07:36.043 04:57:05 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:36.043 04:57:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:36.043 04:57:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:36.043 04:57:05 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:36.043 ************************************
00:07:36.043 START TEST nvme_sgl
00:07:36.043 ************************************
00:07:36.043 04:57:05 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:36.347 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:36.347 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:36.347 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:36.347 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:36.347 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:36.347 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:36.347 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:36.347 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:36.347 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:36.347 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:36.347 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:36.347 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:36.347 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:36.347 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:36.347 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:36.347 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:36.347 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:36.347 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:36.347 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:36.347 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:36.347 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:36.347 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:36.347 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:36.347 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:36.347 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:36.347 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:36.347 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:36.347 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:36.347 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:36.347 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:36.347 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:36.347 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:36.347 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:36.347 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:36.347 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:36.347 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:36.347 NVMe Readv/Writev Request test
00:07:36.347 Attached to 0000:00:13.0
00:07:36.347 Attached to 0000:00:10.0
00:07:36.347 Attached to 0000:00:11.0
00:07:36.347 Attached to 0000:00:12.0
00:07:36.347 0000:00:10.0: build_io_request_2 test passed
00:07:36.347 0000:00:10.0: build_io_request_4 test passed
00:07:36.347 0000:00:10.0: build_io_request_5 test passed
00:07:36.347 0000:00:10.0: build_io_request_6 test passed
00:07:36.347 0000:00:10.0: build_io_request_7 test passed
00:07:36.347 0000:00:10.0: build_io_request_10 test passed
00:07:36.347 0000:00:11.0: build_io_request_2 test passed
00:07:36.347 0000:00:11.0: build_io_request_4 test passed
00:07:36.347 0000:00:11.0: build_io_request_5 test passed
00:07:36.347 0000:00:11.0: build_io_request_6 test passed
00:07:36.347 0000:00:11.0: build_io_request_7 test passed
00:07:36.347 0000:00:11.0: build_io_request_10 test passed
00:07:36.347 Cleaning up...
00:07:36.348
00:07:36.348 real    0m0.232s
00:07:36.348 user    0m0.122s
00:07:36.348 sys     0m0.064s
00:07:36.348 04:57:05 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:36.348 04:57:05 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:36.348 ************************************
00:07:36.348 END TEST nvme_sgl
00:07:36.348 ************************************
00:07:36.348 04:57:05 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:36.348 04:57:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:36.348 04:57:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:36.348 04:57:05 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:36.348 ************************************
00:07:36.348 START TEST nvme_e2edp
00:07:36.348 ************************************
00:07:36.348 04:57:05 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:36.605 NVMe Write/Read with End-to-End data protection test
00:07:36.605 Attached to 0000:00:13.0
00:07:36.605 Attached to 0000:00:10.0
00:07:36.605 Attached to 0000:00:11.0
00:07:36.605 Attached to 0000:00:12.0
00:07:36.605 Cleaning up...
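Looking back at the nvme_sgl results just above: 0000:00:13.0 and 0000:00:12.0 reject all twelve build_io_request variants with "Invalid IO length parameter", while 0000:00:10.0 and 0000:00:11.0 reject six and pass the other six, so each controller accounts for requests 0 through 11 exactly once. That invariant is easy to machine-check; a sketch, with the log path hypothetical and the pattern inferred from the lines above:

    import re
    from collections import defaultdict

    RESULT_RE = re.compile(
        r"(0000:00:1[0-3]\.0): build_io_request_(\d+) "
        r"(Invalid IO length parameter|test passed)"
    )

    def missing_requests(log_text):
        """For each controller, which of requests 0..11 never showed up
        as either a rejection or a pass."""
        seen = defaultdict(set)
        for bdf, num, _result in RESULT_RE.findall(log_text):
            seen[bdf].add(int(num))
        return {bdf: sorted(set(range(12)) - nums) for bdf, nums in seen.items()}

    with open("nvme-vg-autotest.log") as f:  # hypothetical saved copy of this log
        print(missing_requests(f.read()))    # all-empty lists expected for this run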
00:07:36.605
00:07:36.605 real    0m0.181s
00:07:36.605 user    0m0.061s
00:07:36.605 sys     0m0.078s
00:07:36.605 04:57:05 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:36.605 04:57:05 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:36.605 ************************************
00:07:36.605 END TEST nvme_e2edp
00:07:36.605 ************************************
00:07:36.605 04:57:05 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:36.605 04:57:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:36.605 04:57:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:36.605 04:57:05 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:36.605 ************************************
00:07:36.605 START TEST nvme_reserve
00:07:36.605 ************************************
00:07:36.605 04:57:05 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:36.605 =====================================================
00:07:36.605 NVMe Controller at PCI bus 0, device 19, function 0
00:07:36.605 =====================================================
00:07:36.605 Reservations: Not Supported
00:07:36.605 =====================================================
00:07:36.605 NVMe Controller at PCI bus 0, device 16, function 0
00:07:36.605 =====================================================
00:07:36.605 Reservations: Not Supported
00:07:36.605 =====================================================
00:07:36.605 NVMe Controller at PCI bus 0, device 17, function 0
00:07:36.605 =====================================================
00:07:36.605 Reservations: Not Supported
00:07:36.605 =====================================================
00:07:36.605 NVMe Controller at PCI bus 0, device 18, function 0
00:07:36.605 =====================================================
00:07:36.605 Reservations: Not Supported
00:07:36.605 Reservation test passed
00:07:36.863
00:07:36.863 real    0m0.163s
00:07:36.863 user    0m0.058s
00:07:36.863 sys     0m0.070s
00:07:36.863 04:57:05 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:36.863 04:57:05 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:36.863 ************************************
00:07:36.863 END TEST nvme_reserve
00:07:36.863 ************************************
00:07:36.863 04:57:05 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:36.863 04:57:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:36.863 04:57:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:36.863 04:57:05 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:36.863 ************************************
00:07:36.863 START TEST nvme_err_injection
00:07:36.863 ************************************
00:07:36.863 04:57:05 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:36.863 NVMe Error Injection test
00:07:36.863 Attached to 0000:00:13.0
00:07:36.863 Attached to 0000:00:10.0
00:07:36.863 Attached to 0000:00:11.0
00:07:36.863 Attached to 0000:00:12.0
00:07:36.863 0000:00:13.0: get features failed as expected
00:07:36.863 0000:00:10.0: get features failed as expected
00:07:36.863 0000:00:11.0: get features failed as expected
00:07:36.863 0000:00:12.0: get features failed as expected
00:07:36.863 0000:00:13.0: get features successfully as expected
00:07:36.863 0000:00:10.0: get features successfully as expected
00:07:36.863 0000:00:11.0: get features successfully as expected
00:07:36.863 0000:00:12.0: get features successfully as expected
00:07:36.863 0000:00:13.0: read failed as expected
00:07:36.863 0000:00:10.0: read failed as expected
00:07:36.863 0000:00:11.0: read failed as expected
00:07:36.863 0000:00:12.0: read failed as expected
00:07:36.863 0000:00:10.0: read successfully as expected
00:07:36.863 0000:00:11.0: read successfully as expected
00:07:36.863 0000:00:12.0: read successfully as expected
00:07:36.863 0000:00:13.0: read successfully as expected
00:07:36.863 Cleaning up...
00:07:36.863
00:07:36.863 real    0m0.187s
00:07:36.863 user    0m0.071s
00:07:36.863 sys     0m0.071s
00:07:36.863 04:57:06 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:36.863 04:57:06 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:36.863 ************************************
00:07:36.863 END TEST nvme_err_injection
00:07:36.863 ************************************
00:07:37.121 04:57:06 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:37.121 04:57:06 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:07:37.121 04:57:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:37.121 04:57:06 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:37.121 ************************************
00:07:37.121 START TEST nvme_overhead
00:07:37.121 ************************************
00:07:37.121 04:57:06 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:38.053 Initializing NVMe Controllers
00:07:38.053 Attached to 0000:00:13.0
00:07:38.053 Attached to 0000:00:10.0
00:07:38.053 Attached to 0000:00:11.0
00:07:38.053 Attached to 0000:00:12.0
00:07:38.053 Initialization complete. Launching workers.
00:07:38.053 submit (in ns) avg, min, max = 11344.7, 9940.0, 105022.3
00:07:38.053 complete (in ns) avg, min, max = 7619.4, 7299.2, 46916.9
00:07:38.053
00:07:38.053 Submit histogram
00:07:38.053 ================
00:07:38.053 Range in us Cumulative Count
00:07:38.053 [per-bucket rows omitted: range 9.895-105.551 us; roughly 82% of submissions fall between 10.98 and 11.27 us, and the cumulative count reaches 100.0000% at 105.551 us]
00:07:38.054
00:07:38.054 Complete histogram
00:07:38.054 ==================
00:07:38.054 Range in us Cumulative Count
00:07:38.054 [per-bucket rows omitted: range 7.286-47.065 us; roughly 91% of completions fall between 7.34 and 7.63 us, and the cumulative count reaches 100.0000% at 47.065 us]
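As an aside on reading the summary lines above: the avg/min/max triple is a plain reduction over the per-I/O latencies. A minimal sketch, assuming a hypothetical file submit_latencies_ns.txt with one nanosecond latency per line (the test computes this in-process; the file is illustrative only):

    awk 'NR == 1 { min = max = $1 }                                  # seed min/max from the first sample
         { sum += $1; if ($1 < min) min = $1; if ($1 > max) max = $1 }
         END { printf "submit (in ns) avg, min, max = %.1f, %.1f, %.1f\n", sum / NR, min, max }' \
        submit_latencies_ns.txt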
00:07:38.313
00:07:38.313 real 0m1.173s
00:07:38.313 user 0m1.059s
00:07:38.313 sys 0m0.070s
04:57:07 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:38.313 04:57:07 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:07:38.313 ************************************
00:07:38.313 END TEST nvme_overhead
00:07:38.313 ************************************
00:07:38.313 04:57:07 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:38.313 04:57:07 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:38.313 04:57:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:38.313 04:57:07 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:38.313 ************************************
00:07:38.313 START TEST nvme_arbitration
00:07:38.313 ************************************
00:07:38.313 04:57:07 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:41.593 Initializing NVMe Controllers
00:07:41.593 Attached to 0000:00:13.0
00:07:41.593 Attached to 0000:00:10.0
00:07:41.593 Attached to 0000:00:11.0
00:07:41.593 Attached to 0000:00:12.0
00:07:41.593 Associating QEMU NVMe Ctrl (12343 ) with lcore 0
00:07:41.593 Associating QEMU NVMe Ctrl (12340 ) with lcore 1
00:07:41.593 Associating QEMU NVMe Ctrl (12341 ) with lcore 2
00:07:41.593 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:07:41.593 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:07:41.593 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:07:41.593 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:07:41.593 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:07:41.593 Initialization complete. Launching workers.
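For anyone rerunning the arbitration example by hand, a hedged sketch of the invocation captured just above. The flag glosses are assumptions inferred from this log and the per-core results that follow, not authoritative documentation; flags whose meaning is not evident here are copied verbatim and left unglossed.

    ARB=/home/vagrant/spdk_repo/spdk/build/examples/arbitration
    args=(
        -q 64        # queue depth per worker (assumed)
        -s 131072
        -w randrw    # mixed random read/write workload
        -M 50        # 50/50 read/write split (assumed)
        -l 0
        -t 3         # run time in seconds, matching "-t 3" in the run_test line
        -c 0xf       # core mask 0xf: lcores 0-3, one worker thread per core below
        -m 0 -a 0 -b 0
        -n 100000    # I/O count behind the "secs/100000 ios" column (assumed)
        -i 0         # shared-memory id used consistently throughout this job
    )
    "$ARB" "${args[@]}"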
00:07:41.593 Starting thread on core 1 with urgent priority queue 00:07:41.593 Starting thread on core 2 with urgent priority queue 00:07:41.593 Starting thread on core 3 with urgent priority queue 00:07:41.593 Starting thread on core 0 with urgent priority queue 00:07:41.593 QEMU NVMe Ctrl (12343 ) core 0: 6890.67 IO/s 14.51 secs/100000 ios 00:07:41.593 QEMU NVMe Ctrl (12342 ) core 0: 6890.67 IO/s 14.51 secs/100000 ios 00:07:41.593 QEMU NVMe Ctrl (12340 ) core 1: 6890.67 IO/s 14.51 secs/100000 ios 00:07:41.593 QEMU NVMe Ctrl (12342 ) core 1: 6890.67 IO/s 14.51 secs/100000 ios 00:07:41.593 QEMU NVMe Ctrl (12341 ) core 2: 6528.00 IO/s 15.32 secs/100000 ios 00:07:41.593 QEMU NVMe Ctrl (12342 ) core 3: 6656.00 IO/s 15.02 secs/100000 ios 00:07:41.593 ======================================================== 00:07:41.593 00:07:41.593 00:07:41.593 real 0m3.212s 00:07:41.593 user 0m9.014s 00:07:41.593 sys 0m0.101s 00:07:41.593 04:57:10 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.593 04:57:10 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:41.593 ************************************ 00:07:41.593 END TEST nvme_arbitration 00:07:41.593 ************************************ 00:07:41.593 04:57:10 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:41.593 04:57:10 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:41.593 04:57:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.593 04:57:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.593 ************************************ 00:07:41.593 START TEST nvme_single_aen 00:07:41.593 ************************************ 00:07:41.593 04:57:10 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:41.593 Asynchronous Event Request test 00:07:41.593 Attached to 0000:00:13.0 00:07:41.593 Attached to 0000:00:10.0 00:07:41.593 Attached to 0000:00:11.0 00:07:41.593 Attached to 0000:00:12.0 00:07:41.593 Reset controller to setup AER completions for this process 00:07:41.593 Registering asynchronous event callbacks... 
00:07:41.593 Getting orig temperature thresholds of all controllers 00:07:41.593 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.593 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.593 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.593 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.593 Setting all controllers temperature threshold low to trigger AER 00:07:41.593 Waiting for all controllers temperature threshold to be set lower 00:07:41.593 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.593 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:41.593 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.593 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:41.593 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.593 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:41.593 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.593 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:41.593 Waiting for all controllers to trigger AER and reset threshold 00:07:41.593 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.593 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.593 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.593 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.593 Cleaning up... 00:07:41.593 00:07:41.593 real 0m0.221s 00:07:41.593 user 0m0.070s 00:07:41.593 sys 0m0.104s 00:07:41.593 04:57:10 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.593 04:57:10 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:41.593 ************************************ 00:07:41.593 END TEST nvme_single_aen 00:07:41.593 ************************************ 00:07:41.593 04:57:10 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:41.593 04:57:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.593 04:57:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.593 04:57:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.852 ************************************ 00:07:41.852 START TEST nvme_doorbell_aers 00:07:41.852 ************************************ 00:07:41.852 04:57:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:41.852 04:57:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:41.852 04:57:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:41.852 04:57:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:41.852 04:57:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:41.852 04:57:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:41.852 04:57:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:41.852 04:57:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:41.852 04:57:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:41.852 04:57:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
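The nvme_single_aen sequence above (read the temperature threshold, set it below the current composite temperature so an AER fires, then restore it) can be approximated from userspace with nvme-cli while a controller is bound to the kernel driver. A rough sketch, assuming a hypothetical /dev/nvme0 and feature id 0x04 (Temperature Threshold) per the NVMe spec:

    nvme get-feature /dev/nvme0 -f 0x04           # read the current threshold (343 K in the run above)
    nvme set-feature /dev/nvme0 -f 0x04 -v 310    # drop it below the 323 K composite temperature -> AER fires
    nvme set-feature /dev/nvme0 -f 0x04 -v 343    # restore the original threshold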
00:07:41.852 04:57:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:41.852 04:57:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:41.852 04:57:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:41.852 04:57:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:41.852 [2024-11-28 04:57:11.110076] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74690) is not found. Dropping the request. 00:07:51.815 Executing: test_write_invalid_db 00:07:51.815 Waiting for AER completion... 00:07:51.815 Failure: test_write_invalid_db 00:07:51.815 00:07:51.815 Executing: test_invalid_db_write_overflow_sq 00:07:51.815 Waiting for AER completion... 00:07:51.815 Failure: test_invalid_db_write_overflow_sq 00:07:51.815 00:07:51.815 Executing: test_invalid_db_write_overflow_cq 00:07:51.815 Waiting for AER completion... 00:07:51.815 Failure: test_invalid_db_write_overflow_cq 00:07:51.815 00:07:51.815 04:57:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:51.815 04:57:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:52.073 [2024-11-28 04:57:21.140972] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74690) is not found. Dropping the request. 00:08:02.041 Executing: test_write_invalid_db 00:08:02.041 Waiting for AER completion... 00:08:02.041 Failure: test_write_invalid_db 00:08:02.041 00:08:02.041 Executing: test_invalid_db_write_overflow_sq 00:08:02.041 Waiting for AER completion... 00:08:02.041 Failure: test_invalid_db_write_overflow_sq 00:08:02.041 00:08:02.041 Executing: test_invalid_db_write_overflow_cq 00:08:02.041 Waiting for AER completion... 00:08:02.041 Failure: test_invalid_db_write_overflow_cq 00:08:02.041 00:08:02.041 04:57:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:02.041 04:57:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:02.041 [2024-11-28 04:57:31.160522] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74690) is not found. Dropping the request. 00:08:12.006 Executing: test_write_invalid_db 00:08:12.006 Waiting for AER completion... 00:08:12.006 Failure: test_write_invalid_db 00:08:12.006 00:08:12.006 Executing: test_invalid_db_write_overflow_sq 00:08:12.006 Waiting for AER completion... 00:08:12.006 Failure: test_invalid_db_write_overflow_sq 00:08:12.006 00:08:12.006 Executing: test_invalid_db_write_overflow_cq 00:08:12.007 Waiting for AER completion... 
00:08:12.007 Failure: test_invalid_db_write_overflow_cq 00:08:12.007 00:08:12.007 04:57:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:12.007 04:57:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:12.007 [2024-11-28 04:57:41.191543] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74690) is not found. Dropping the request. 00:08:21.981 Executing: test_write_invalid_db 00:08:21.981 Waiting for AER completion... 00:08:21.981 Failure: test_write_invalid_db 00:08:21.981 00:08:21.981 Executing: test_invalid_db_write_overflow_sq 00:08:21.981 Waiting for AER completion... 00:08:21.981 Failure: test_invalid_db_write_overflow_sq 00:08:21.981 00:08:21.981 Executing: test_invalid_db_write_overflow_cq 00:08:21.981 Waiting for AER completion... 00:08:21.981 Failure: test_invalid_db_write_overflow_cq 00:08:21.981 00:08:21.981 00:08:21.981 real 0m40.166s 00:08:21.981 user 0m34.169s 00:08:21.981 sys 0m5.643s 00:08:21.981 04:57:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.981 04:57:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:21.981 ************************************ 00:08:21.981 END TEST nvme_doorbell_aers 00:08:21.981 ************************************ 00:08:21.981 04:57:51 nvme -- nvme/nvme.sh@97 -- # uname 00:08:21.981 04:57:51 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:21.981 04:57:51 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:21.981 04:57:51 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:21.981 04:57:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.981 04:57:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.981 ************************************ 00:08:21.981 START TEST nvme_multi_aen 00:08:21.981 ************************************ 00:08:21.981 04:57:51 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:21.981 [2024-11-28 04:57:51.221979] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74690) is not found. Dropping the request. 00:08:21.981 [2024-11-28 04:57:51.222037] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74690) is not found. Dropping the request. 00:08:21.981 [2024-11-28 04:57:51.222049] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74690) is not found. Dropping the request. 00:08:21.981 [2024-11-28 04:57:51.223074] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74690) is not found. Dropping the request. 00:08:21.981 [2024-11-28 04:57:51.223097] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74690) is not found. Dropping the request. 00:08:21.981 [2024-11-28 04:57:51.223106] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74690) is not found. Dropping the request. 00:08:21.981 [2024-11-28 04:57:51.224242] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74690) is not found. 
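The bdf list that nvme_doorbell_aers looped over above comes from gen_nvme.sh piped through jq, as shown earlier. On a host where the controllers are still bound to the kernel nvme driver, the same PCI addresses can be read straight from sysfs; a minimal equivalent sketch (not usable once the devices are rebound to SPDK's userspace driver):

    for ctrl in /sys/class/nvme/nvme*; do
        basename "$(readlink -f "$ctrl/device")"   # prints one bdf per controller, e.g. 0000:00:10.0
    done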
Dropping the request. 00:08:21.981 [2024-11-28 04:57:51.224264] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74690) is not found. Dropping the request. 00:08:21.981 [2024-11-28 04:57:51.224271] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74690) is not found. Dropping the request. 00:08:21.981 [2024-11-28 04:57:51.225172] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74690) is not found. Dropping the request. 00:08:21.981 [2024-11-28 04:57:51.225201] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74690) is not found. Dropping the request. 00:08:21.981 [2024-11-28 04:57:51.225208] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74690) is not found. Dropping the request. 00:08:21.981 Child process pid: 75216 00:08:22.240 [Child] Asynchronous Event Request test 00:08:22.240 [Child] Attached to 0000:00:13.0 00:08:22.240 [Child] Attached to 0000:00:10.0 00:08:22.240 [Child] Attached to 0000:00:11.0 00:08:22.240 [Child] Attached to 0000:00:12.0 00:08:22.240 [Child] Registering asynchronous event callbacks... 00:08:22.240 [Child] Getting orig temperature thresholds of all controllers 00:08:22.240 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.240 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.240 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.240 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.240 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:22.240 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.240 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.240 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.240 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.240 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.240 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.240 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.240 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.240 [Child] Cleaning up... 00:08:22.240 Asynchronous Event Request test 00:08:22.240 Attached to 0000:00:13.0 00:08:22.240 Attached to 0000:00:10.0 00:08:22.240 Attached to 0000:00:11.0 00:08:22.240 Attached to 0000:00:12.0 00:08:22.240 Reset controller to setup AER completions for this process 00:08:22.240 Registering asynchronous event callbacks... 
00:08:22.240 Getting orig temperature thresholds of all controllers 00:08:22.240 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.240 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.240 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.240 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.240 Setting all controllers temperature threshold low to trigger AER 00:08:22.240 Waiting for all controllers temperature threshold to be set lower 00:08:22.240 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.240 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:22.240 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.240 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:22.240 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.240 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:22.240 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.240 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:22.240 Waiting for all controllers to trigger AER and reset threshold 00:08:22.240 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.240 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.240 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.240 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.240 Cleaning up... 00:08:22.240 00:08:22.240 real 0m0.374s 00:08:22.240 user 0m0.123s 00:08:22.241 sys 0m0.143s 00:08:22.241 04:57:51 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.241 04:57:51 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:22.241 ************************************ 00:08:22.241 END TEST nvme_multi_aen 00:08:22.241 ************************************ 00:08:22.241 04:57:51 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:22.241 04:57:51 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:22.241 04:57:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.241 04:57:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.241 ************************************ 00:08:22.241 START TEST nvme_startup 00:08:22.241 ************************************ 00:08:22.241 04:57:51 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:22.499 Initializing NVMe Controllers 00:08:22.499 Attached to 0000:00:13.0 00:08:22.499 Attached to 0000:00:10.0 00:08:22.499 Attached to 0000:00:11.0 00:08:22.499 Attached to 0000:00:12.0 00:08:22.499 Initialization complete. 00:08:22.499 Time used:124994.078 (us). 
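Every test in this section is wrapped by the same run_test helper, visible in the log as the starred START/END banners around a bash `time` report (the real/user/sys triples). A behavioral sketch only; the actual helper in common/autotest_common.sh also manages xtrace and argument validation:

    run_test() {    # sketch of the observable behavior, not the real implementation
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }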
00:08:22.499 00:08:22.499 real 0m0.174s 00:08:22.499 user 0m0.055s 00:08:22.499 sys 0m0.077s 00:08:22.499 04:57:51 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.499 ************************************ 00:08:22.499 04:57:51 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:22.499 END TEST nvme_startup 00:08:22.499 ************************************ 00:08:22.499 04:57:51 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:22.499 04:57:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.499 04:57:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.499 04:57:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.499 ************************************ 00:08:22.499 START TEST nvme_multi_secondary 00:08:22.499 ************************************ 00:08:22.499 04:57:51 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:22.499 04:57:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75267 00:08:22.499 04:57:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75268 00:08:22.499 04:57:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:22.499 04:57:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:22.499 04:57:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:25.791 Initializing NVMe Controllers 00:08:25.791 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:25.791 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:25.791 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:25.791 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:25.791 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:25.791 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:25.791 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:25.791 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:25.791 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:25.791 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:25.791 Initialization complete. Launching workers. 
00:08:25.791 ======================================================== 00:08:25.791 Latency(us) 00:08:25.791 Device Information : IOPS MiB/s Average min max 00:08:25.791 PCIE (0000:00:13.0) NSID 1 from core 2: 2343.01 9.15 6826.68 798.22 28630.63 00:08:25.791 PCIE (0000:00:10.0) NSID 1 from core 2: 2343.01 9.15 6827.32 789.47 33427.99 00:08:25.791 PCIE (0000:00:11.0) NSID 1 from core 2: 2343.01 9.15 6842.60 788.27 28103.94 00:08:25.791 PCIE (0000:00:12.0) NSID 1 from core 2: 2343.01 9.15 6843.07 788.35 28855.72 00:08:25.791 PCIE (0000:00:12.0) NSID 2 from core 2: 2343.01 9.15 6843.22 801.87 30750.00 00:08:25.791 PCIE (0000:00:12.0) NSID 3 from core 2: 2343.01 9.15 6845.95 804.70 28574.07 00:08:25.791 ======================================================== 00:08:25.791 Total : 14058.06 54.91 6838.14 788.27 33427.99 00:08:25.791 00:08:25.791 04:57:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75267 00:08:25.791 Initializing NVMe Controllers 00:08:25.791 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:25.791 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:25.791 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:25.791 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:25.791 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:25.791 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:25.791 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:25.791 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:25.791 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:25.791 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:25.791 Initialization complete. Launching workers. 00:08:25.791 ======================================================== 00:08:25.791 Latency(us) 00:08:25.791 Device Information : IOPS MiB/s Average min max 00:08:25.791 PCIE (0000:00:13.0) NSID 1 from core 1: 5511.32 21.53 2902.55 929.16 14949.39 00:08:25.791 PCIE (0000:00:10.0) NSID 1 from core 1: 5511.32 21.53 2901.57 915.04 14530.45 00:08:25.791 PCIE (0000:00:11.0) NSID 1 from core 1: 5511.32 21.53 2902.62 922.66 12877.33 00:08:25.791 PCIE (0000:00:12.0) NSID 1 from core 1: 5511.32 21.53 2902.74 941.04 13843.95 00:08:25.791 PCIE (0000:00:12.0) NSID 2 from core 1: 5511.32 21.53 2902.79 927.43 16838.52 00:08:25.791 PCIE (0000:00:12.0) NSID 3 from core 1: 5511.32 21.53 2902.78 919.36 16141.69 00:08:25.791 ======================================================== 00:08:25.791 Total : 33067.95 129.17 2902.51 915.04 16838.52 00:08:25.791 00:08:27.704 Initializing NVMe Controllers 00:08:27.704 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:27.704 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:27.704 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:27.704 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:27.704 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:27.704 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:27.704 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:27.704 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:27.704 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:27.704 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:27.704 Initialization complete. Launching workers. 
00:08:27.704 ======================================================== 00:08:27.704 Latency(us) 00:08:27.704 Device Information : IOPS MiB/s Average min max 00:08:27.704 PCIE (0000:00:13.0) NSID 1 from core 0: 6706.65 26.20 2385.20 730.04 11446.00 00:08:27.704 PCIE (0000:00:10.0) NSID 1 from core 0: 6706.65 26.20 2384.27 713.47 12042.67 00:08:27.704 PCIE (0000:00:11.0) NSID 1 from core 0: 6706.65 26.20 2385.28 702.21 12441.62 00:08:27.704 PCIE (0000:00:12.0) NSID 1 from core 0: 6706.65 26.20 2385.24 708.07 13790.97 00:08:27.704 PCIE (0000:00:12.0) NSID 2 from core 0: 6706.65 26.20 2385.21 724.94 11276.16 00:08:27.704 PCIE (0000:00:12.0) NSID 3 from core 0: 6706.65 26.20 2385.17 653.44 11180.39 00:08:27.704 ======================================================== 00:08:27.704 Total : 40239.88 157.19 2385.06 653.44 13790.97 00:08:27.704 00:08:27.704 04:57:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75268 00:08:27.704 04:57:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75337 00:08:27.704 04:57:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:27.704 04:57:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75338 00:08:27.704 04:57:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:27.704 04:57:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:31.024 Initializing NVMe Controllers 00:08:31.024 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:31.024 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:31.024 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:31.024 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:31.024 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:31.024 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:31.024 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:31.024 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:31.024 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:31.024 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:31.024 Initialization complete. Launching workers. 
00:08:31.024 ======================================================== 00:08:31.024 Latency(us) 00:08:31.024 Device Information : IOPS MiB/s Average min max 00:08:31.024 PCIE (0000:00:13.0) NSID 1 from core 1: 5784.78 22.60 2765.41 763.03 14492.91 00:08:31.024 PCIE (0000:00:10.0) NSID 1 from core 1: 5784.78 22.60 2764.25 740.48 14827.40 00:08:31.024 PCIE (0000:00:11.0) NSID 1 from core 1: 5784.78 22.60 2765.14 742.67 13662.17 00:08:31.024 PCIE (0000:00:12.0) NSID 1 from core 1: 5784.78 22.60 2765.06 743.82 14238.12 00:08:31.024 PCIE (0000:00:12.0) NSID 2 from core 1: 5784.78 22.60 2765.36 742.87 14456.62 00:08:31.024 PCIE (0000:00:12.0) NSID 3 from core 1: 5784.78 22.60 2765.30 756.93 14269.53 00:08:31.024 ======================================================== 00:08:31.024 Total : 34708.68 135.58 2765.09 740.48 14827.40 00:08:31.024 00:08:31.024 Initializing NVMe Controllers 00:08:31.024 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:31.024 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:31.024 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:31.024 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:31.024 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:31.024 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:31.024 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:31.024 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:31.024 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:31.024 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:31.024 Initialization complete. Launching workers. 00:08:31.024 ======================================================== 00:08:31.024 Latency(us) 00:08:31.024 Device Information : IOPS MiB/s Average min max 00:08:31.024 PCIE (0000:00:13.0) NSID 1 from core 0: 5625.17 21.97 2843.66 900.47 12105.47 00:08:31.024 PCIE (0000:00:10.0) NSID 1 from core 0: 5625.17 21.97 2842.56 886.04 11181.40 00:08:31.024 PCIE (0000:00:11.0) NSID 1 from core 0: 5625.17 21.97 2843.35 900.99 11877.90 00:08:31.024 PCIE (0000:00:12.0) NSID 1 from core 0: 5625.17 21.97 2843.04 894.07 11542.37 00:08:31.024 PCIE (0000:00:12.0) NSID 2 from core 0: 5625.17 21.97 2842.64 907.26 11800.37 00:08:31.024 PCIE (0000:00:12.0) NSID 3 from core 0: 5625.17 21.97 2842.28 787.79 11713.14 00:08:31.024 ======================================================== 00:08:31.024 Total : 33751.05 131.84 2842.92 787.79 12105.47 00:08:31.024 00:08:33.571 Initializing NVMe Controllers 00:08:33.571 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:33.571 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:33.571 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:33.571 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:33.571 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:33.571 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:33.571 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:33.571 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:33.571 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:33.571 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:33.571 Initialization complete. Launching workers. 
00:08:33.571 ======================================================== 00:08:33.571 Latency(us) 00:08:33.571 Device Information : IOPS MiB/s Average min max 00:08:33.571 PCIE (0000:00:13.0) NSID 1 from core 2: 2423.91 9.47 6600.07 935.59 28323.82 00:08:33.571 PCIE (0000:00:10.0) NSID 1 from core 2: 2423.91 9.47 6599.01 914.30 25783.96 00:08:33.571 PCIE (0000:00:11.0) NSID 1 from core 2: 2423.91 9.47 6600.09 936.51 26315.77 00:08:33.571 PCIE (0000:00:12.0) NSID 1 from core 2: 2423.91 9.47 6599.58 937.74 28167.02 00:08:33.571 PCIE (0000:00:12.0) NSID 2 from core 2: 2423.91 9.47 6600.08 935.33 28086.79 00:08:33.571 PCIE (0000:00:12.0) NSID 3 from core 2: 2427.11 9.48 6592.60 938.92 31733.17 00:08:33.571 ======================================================== 00:08:33.571 Total : 14546.65 56.82 6598.57 914.30 31733.17 00:08:33.571 00:08:33.571 04:58:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75337 00:08:33.571 04:58:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75338 00:08:33.571 00:08:33.571 real 0m10.707s 00:08:33.571 user 0m18.198s 00:08:33.571 sys 0m0.577s 00:08:33.571 04:58:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.571 ************************************ 00:08:33.571 END TEST nvme_multi_secondary 00:08:33.571 ************************************ 00:08:33.571 04:58:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:33.571 04:58:02 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:33.571 04:58:02 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:33.571 04:58:02 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74304 ]] 00:08:33.571 04:58:02 nvme -- common/autotest_common.sh@1094 -- # kill 74304 00:08:33.571 04:58:02 nvme -- common/autotest_common.sh@1095 -- # wait 74304 00:08:33.571 [2024-11-28 04:58:02.472317] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75215) is not found. Dropping the request. 00:08:33.571 [2024-11-28 04:58:02.472394] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75215) is not found. Dropping the request. 00:08:33.571 [2024-11-28 04:58:02.472411] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75215) is not found. Dropping the request. 00:08:33.571 [2024-11-28 04:58:02.472429] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75215) is not found. Dropping the request. 00:08:33.572 [2024-11-28 04:58:02.473577] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75215) is not found. Dropping the request. 00:08:33.572 [2024-11-28 04:58:02.473647] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75215) is not found. Dropping the request. 00:08:33.572 [2024-11-28 04:58:02.473681] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75215) is not found. Dropping the request. 00:08:33.572 [2024-11-28 04:58:02.473699] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75215) is not found. Dropping the request. 00:08:33.572 [2024-11-28 04:58:02.475218] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75215) is not found. Dropping the request. 
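The nvme_multi_secondary test above drives one primary and two secondary spdk_nvme_perf processes that share the attached controllers through a common shared-memory id (-i 0) while running on disjoint core masks, which is why three separate latency tables appear per pass. A hedged orchestration sketch; the sleep is an assumed safeguard, not something this log confirms:

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary: lcore 0, 5 s, 4 KiB reads, queue depth 16
    sleep 1                                            # give the primary time to initialize (assumption)
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary: lcore 1, 3 s
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # secondary: lcore 2, 3 s
    wait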
00:08:33.572 [2024-11-28 04:58:02.475282] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75215) is not found. Dropping the request. 00:08:33.572 [2024-11-28 04:58:02.475299] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75215) is not found. Dropping the request. 00:08:33.572 [2024-11-28 04:58:02.475318] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75215) is not found. Dropping the request. 00:08:33.572 [2024-11-28 04:58:02.476381] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75215) is not found. Dropping the request. 00:08:33.572 [2024-11-28 04:58:02.476472] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75215) is not found. Dropping the request. 00:08:33.572 [2024-11-28 04:58:02.476494] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75215) is not found. Dropping the request. 00:08:33.572 [2024-11-28 04:58:02.476513] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75215) is not found. Dropping the request. 00:08:33.572 04:58:02 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:33.572 04:58:02 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:33.572 04:58:02 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:33.572 04:58:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.572 04:58:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.572 04:58:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.572 ************************************ 00:08:33.572 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:33.572 ************************************ 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:33.572 * Looking for test storage... 
00:08:33.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:33.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.572 --rc genhtml_branch_coverage=1 00:08:33.572 --rc genhtml_function_coverage=1 00:08:33.572 --rc genhtml_legend=1 00:08:33.572 --rc geninfo_all_blocks=1 00:08:33.572 --rc geninfo_unexecuted_blocks=1 00:08:33.572 00:08:33.572 ' 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:33.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.572 --rc genhtml_branch_coverage=1 00:08:33.572 --rc genhtml_function_coverage=1 00:08:33.572 --rc genhtml_legend=1 00:08:33.572 --rc geninfo_all_blocks=1 00:08:33.572 --rc geninfo_unexecuted_blocks=1 00:08:33.572 00:08:33.572 ' 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:33.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.572 --rc genhtml_branch_coverage=1 00:08:33.572 --rc genhtml_function_coverage=1 00:08:33.572 --rc genhtml_legend=1 00:08:33.572 --rc geninfo_all_blocks=1 00:08:33.572 --rc geninfo_unexecuted_blocks=1 00:08:33.572 00:08:33.572 ' 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:33.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.572 --rc genhtml_branch_coverage=1 00:08:33.572 --rc genhtml_function_coverage=1 00:08:33.572 --rc genhtml_legend=1 00:08:33.572 --rc geninfo_all_blocks=1 00:08:33.572 --rc geninfo_unexecuted_blocks=1 00:08:33.572 00:08:33.572 ' 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:33.572 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:33.572 
04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:33.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75499 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75499 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 75499 ']' 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.573 04:58:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:33.573 [2024-11-28 04:58:02.837297] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:08:33.573 [2024-11-28 04:58:02.837416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75499 ] 00:08:33.831 [2024-11-28 04:58:02.991599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.831 [2024-11-28 04:58:03.025292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.831 [2024-11-28 04:58:03.025411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.831 [2024-11-28 04:58:03.025769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.831 [2024-11-28 04:58:03.025955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:34.774 nvme0n1 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_OK1yg.txt 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:34.774 true 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732769883 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75527 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:34.774 04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:34.774 
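To unpack the error-injection setup above: the option strings are verbatim from this log, while the glosses are inferences (decimal opcode 10 is 0x0a, Get Features, which matches the GET FEATURES NUMBER OF QUEUES completion printed further down).

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # -n nvme0            controller registered via bdev_nvme_attach_controller above
    # --opc 10            admin opcode 0x0a (Get Features); the bdev_nvme_send_cmd
    #                     call above then issues exactly that command (payload
    #                     starts with 0x0a, cdw10 = 7, Number of Queues)
    # --sct 0 --sc 1      inject status SCT=0/SC=0x1, later printed as
    #                     "INVALID OPCODE (00/01)"
    # --do_not_submit     hold the matched command rather than sending it to the
    #                     device, so the controller reset has a stuck admin
    #                     command to clean up (inferred from the test's purpose)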
04:58:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:36.686 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:36.686 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.686 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:36.686 [2024-11-28 04:58:05.855819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:36.686 [2024-11-28 04:58:05.856279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:36.686 [2024-11-28 04:58:05.856311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:36.686 [2024-11-28 04:58:05.856327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.686 [2024-11-28 04:58:05.860226] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:36.686 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75527 00:08:36.686 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.686 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75527 00:08:36.686 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75527 00:08:36.686 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:36.686 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:36.686 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:36.686 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.686 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:36.686 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.686 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_OK1yg.txt 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_OK1yg.txt 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75499 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 75499 ']' 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 75499 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75499 00:08:36.687 killing process with pid 75499 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75499' 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 75499 00:08:36.687 04:58:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 75499 00:08:37.258 04:58:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:37.258 04:58:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:37.258 00:08:37.258 real 0m3.687s 00:08:37.258 user 0m13.200s 00:08:37.258 sys 0m0.512s 00:08:37.258 ************************************ 00:08:37.258 END TEST bdev_nvme_reset_stuck_adm_cmd 
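The block above is the whole bdev_nvme_reset_stuck_adm_cmd sequence: attach a PCIe controller, arm a one-shot error injection on the Get Features admin opcode (10) with --do_not_submit and a 15 s timeout, launch that command in the background so it sits stuck, reset the controller underneath it, then decode the manually-completed status out of the captured JSON. The sketch below condenses that flow; the rpc.py calls and the hexdump pipeline are taken from the trace, while the body of base64_decode_bits is a reconstruction (it assumes, consistent with the decoded values in the log, that the completion's status word sits in bytes 14-15):

    #!/usr/bin/env bash
    # Condensed sketch of the stuck-admin-command reset test traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Get Features (opc 0x0a), cdw10=7 (Number of Queues), base64-encoded:
    cmd=CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==

    base64_decode_bits() { # <base64 completion> <shift> <mask>
        local bin_array status
        bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
        # Assumption: status word in completion bytes 14-15 (little-endian);
        # bit 0 is the phase tag, bits 1-8 the SC, bits 9-11 the SCT.
        status=$((bin_array[14] | bin_array[15] << 8))
        printf '0x%x' $(((status >> $2) & $3))
    }

    tmp_file=$(mktemp /tmp/err_inj_XXXXX.txt)
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd" > "$tmp_file" &
    get_feat_pid=$!
    sleep 2
    "$rpc" bdev_nvme_reset_controller nvme0    # reset while the command is held
    wait "$get_feat_pid"

    cpl=$(jq -r .cpl "$tmp_file")
    echo "sc=$(base64_decode_bits "$cpl" 1 255) sct=$(base64_decode_bits "$cpl" 9 3)"
    "$rpc" bdev_nvme_detach_controller nvme0

The injected --sct 0 --sc 1 must match the decoded 0x0/0x1 pair, which is exactly the (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) check near the end of the trace.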
00:08:37.258 ************************************ 00:08:37.258 04:58:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.258 04:58:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:37.258 04:58:06 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:37.258 04:58:06 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:37.258 04:58:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.258 04:58:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.258 04:58:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:37.258 ************************************ 00:08:37.258 START TEST nvme_fio 00:08:37.258 ************************************ 00:08:37.258 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:37.258 04:58:06 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:37.258 04:58:06 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:37.258 04:58:06 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:37.258 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:37.258 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:37.258 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:37.259 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:37.259 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:37.259 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:37.259 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:37.259 04:58:06 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:37.259 04:58:06 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:37.259 04:58:06 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:37.259 04:58:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:37.259 04:58:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:37.519 04:58:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:37.519 04:58:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:37.519 04:58:06 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:37.519 04:58:06 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:37.519 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:37.519 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:37.519 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:37.519 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:37.519 04:58:06 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:37.519 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:37.519 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:37.519 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:37.519 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:37.519 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:37.519 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:37.778 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:37.778 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:37.778 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:37.778 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:37.778 04:58:06 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:37.778 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:37.778 fio-3.35 00:08:37.778 Starting 1 thread 00:08:43.073 00:08:43.073 test: (groupid=0, jobs=1): err= 0: pid=75651: Thu Nov 28 04:58:11 2024 00:08:43.073 read: IOPS=15.5k, BW=60.4MiB/s (63.3MB/s)(121MiB/2001msec) 00:08:43.073 slat (usec): min=4, max=319, avg= 6.59, stdev= 4.11 00:08:43.073 clat (usec): min=485, max=12106, avg=4111.75, stdev=1481.61 00:08:43.073 lat (usec): min=490, max=12143, avg=4118.34, stdev=1482.88 00:08:43.073 clat percentiles (usec): 00:08:43.073 | 1.00th=[ 2311], 5.00th=[ 2671], 10.00th=[ 2802], 20.00th=[ 2999], 00:08:43.073 | 30.00th=[ 3163], 40.00th=[ 3294], 50.00th=[ 3490], 60.00th=[ 3818], 00:08:43.073 | 70.00th=[ 4555], 80.00th=[ 5342], 90.00th=[ 6325], 95.00th=[ 7046], 00:08:43.073 | 99.00th=[ 8717], 99.50th=[ 9372], 99.90th=[10945], 99.95th=[11207], 00:08:43.073 | 99.99th=[11863] 00:08:43.073 bw ( KiB/s): min=58672, max=67464, per=100.00%, avg=63186.67, stdev=4400.80, samples=3 00:08:43.073 iops : min=14668, max=16866, avg=15796.67, stdev=1100.20, samples=3 00:08:43.073 write: IOPS=15.5k, BW=60.4MiB/s (63.3MB/s)(121MiB/2001msec); 0 zone resets 00:08:43.073 slat (nsec): min=4886, max=88730, avg=6793.16, stdev=3661.56 00:08:43.073 clat (usec): min=503, max=11886, avg=4143.92, stdev=1476.42 00:08:43.073 lat (usec): min=508, max=11910, avg=4150.71, stdev=1477.66 00:08:43.073 clat percentiles (usec): 00:08:43.073 | 1.00th=[ 2343], 5.00th=[ 2704], 10.00th=[ 2835], 20.00th=[ 3032], 00:08:43.073 | 30.00th=[ 3195], 40.00th=[ 3326], 50.00th=[ 3523], 60.00th=[ 3851], 00:08:43.073 | 70.00th=[ 4621], 80.00th=[ 5407], 90.00th=[ 6325], 95.00th=[ 7046], 00:08:43.073 | 99.00th=[ 8717], 99.50th=[ 9372], 99.90th=[10814], 99.95th=[11076], 00:08:43.073 | 99.99th=[11600] 00:08:43.073 bw ( KiB/s): min=58184, max=67552, per=100.00%, avg=62893.33, stdev=4684.21, samples=3 00:08:43.073 iops : min=14546, max=16888, avg=15722.67, stdev=1171.04, samples=3 00:08:43.073 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:08:43.073 lat (msec) : 2=0.48%, 4=62.15%, 10=37.07%, 20=0.25% 00:08:43.073 cpu : usr=98.20%, sys=0.45%, ctx=22, majf=0, 
minf=625 00:08:43.073 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:43.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:43.073 issued rwts: total=30924,30942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:43.073 00:08:43.073 Run status group 0 (all jobs): 00:08:43.073 READ: bw=60.4MiB/s (63.3MB/s), 60.4MiB/s-60.4MiB/s (63.3MB/s-63.3MB/s), io=121MiB (127MB), run=2001-2001msec 00:08:43.073 WRITE: bw=60.4MiB/s (63.3MB/s), 60.4MiB/s-60.4MiB/s (63.3MB/s-63.3MB/s), io=121MiB (127MB), run=2001-2001msec 00:08:43.073 ----------------------------------------------------- 00:08:43.073 Suppressions used: 00:08:43.073 count bytes template 00:08:43.073 1 32 /usr/src/fio/parse.c 00:08:43.073 1 8 libtcmalloc_minimal.so 00:08:43.073 ----------------------------------------------------- 00:08:43.073 00:08:43.073 04:58:12 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:43.073 04:58:12 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:43.073 04:58:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:43.073 04:58:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:43.336 04:58:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:43.336 04:58:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:43.596 04:58:12 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:43.596 04:58:12 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:43.596 04:58:12 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:43.596 04:58:12 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:43.596 04:58:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:43.596 04:58:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:43.596 04:58:12 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:43.596 04:58:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:43.596 04:58:12 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:43.596 04:58:12 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:43.596 04:58:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:43.596 04:58:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:43.596 04:58:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:43.596 04:58:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:43.596 04:58:12 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:43.596 04:58:12 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:43.596 04:58:12 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:43.596 04:58:12 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:43.596 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:43.596 fio-3.35 00:08:43.596 Starting 1 thread 00:08:50.235 00:08:50.235 test: (groupid=0, jobs=1): err= 0: pid=75707: Thu Nov 28 04:58:18 2024 00:08:50.235 read: IOPS=19.0k, BW=74.2MiB/s (77.8MB/s)(148MiB/2001msec) 00:08:50.235 slat (nsec): min=4259, max=86175, avg=5568.24, stdev=2825.33 00:08:50.235 clat (usec): min=989, max=10776, avg=3346.17, stdev=1170.00 00:08:50.235 lat (usec): min=993, max=10826, avg=3351.74, stdev=1171.15 00:08:50.235 clat percentiles (usec): 00:08:50.235 | 1.00th=[ 2024], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2540], 00:08:50.235 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2900], 60.00th=[ 3064], 00:08:50.235 | 70.00th=[ 3392], 80.00th=[ 4080], 90.00th=[ 5145], 95.00th=[ 5997], 00:08:50.235 | 99.00th=[ 7046], 99.50th=[ 7439], 99.90th=[ 8979], 99.95th=[ 9634], 00:08:50.235 | 99.99th=[10683] 00:08:50.235 bw ( KiB/s): min=75344, max=78360, per=100.00%, avg=76432.00, stdev=1674.30, samples=3 00:08:50.235 iops : min=18836, max=19590, avg=19108.67, stdev=418.07, samples=3 00:08:50.235 write: IOPS=19.0k, BW=74.2MiB/s (77.8MB/s)(148MiB/2001msec); 0 zone resets 00:08:50.235 slat (nsec): min=4334, max=78586, avg=5717.94, stdev=2859.30 00:08:50.235 clat (usec): min=979, max=10682, avg=3372.92, stdev=1161.86 00:08:50.235 lat (usec): min=984, max=10695, avg=3378.64, stdev=1163.05 00:08:50.235 clat percentiles (usec): 00:08:50.235 | 1.00th=[ 2073], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2573], 00:08:50.235 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 3097], 00:08:50.235 | 70.00th=[ 3425], 80.00th=[ 4113], 90.00th=[ 5145], 95.00th=[ 5997], 00:08:50.235 | 99.00th=[ 7046], 99.50th=[ 7439], 99.90th=[ 8979], 99.95th=[ 9765], 00:08:50.235 | 99.99th=[10552] 00:08:50.235 bw ( KiB/s): min=75584, max=78208, per=100.00%, avg=76584.00, stdev=1418.93, samples=3 00:08:50.235 iops : min=18896, max=19552, avg=19146.00, stdev=354.73, samples=3 00:08:50.235 lat (usec) : 1000=0.01% 00:08:50.235 lat (msec) : 2=0.79%, 4=77.95%, 10=21.23%, 20=0.02% 00:08:50.235 cpu : usr=98.95%, sys=0.10%, ctx=2, majf=0, minf=624 00:08:50.235 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:50.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:50.235 issued rwts: total=38005,38012,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.235 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:50.235 00:08:50.235 Run status group 0 (all jobs): 00:08:50.235 READ: bw=74.2MiB/s (77.8MB/s), 74.2MiB/s-74.2MiB/s (77.8MB/s-77.8MB/s), io=148MiB (156MB), run=2001-2001msec 00:08:50.235 WRITE: bw=74.2MiB/s (77.8MB/s), 74.2MiB/s-74.2MiB/s (77.8MB/s-77.8MB/s), io=148MiB (156MB), run=2001-2001msec 00:08:50.235 ----------------------------------------------------- 00:08:50.235 Suppressions used: 00:08:50.235 count bytes template 00:08:50.235 1 32 /usr/src/fio/parse.c 00:08:50.235 1 8 libtcmalloc_minimal.so 00:08:50.235 ----------------------------------------------------- 00:08:50.235 00:08:50.235 04:58:19 
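Both fio passes above (and the two that follow) repeat one pattern per controller: enumerate the PCIe addresses with gen_nvme.sh piped through jq, confirm an active namespace via spdk_nvme_identify, settle on --bs=4096 when the identify output carries no 'Extended Data LBA' line, then run stock fio against the SPDK external ioengine. Because build/fio/spdk_nvme is ASan-instrumented while /usr/src/fio/fio is not, the sanitizer runtime is located via ldd on the plugin and preloaded ahead of it. A minimal sketch of one pass, with paths as they appear in the trace:

    #!/usr/bin/env bash
    # One nvme_fio pass, condensed from the trace above.
    rootdir=/home/vagrant/spdk_repo/spdk
    plugin=$rootdir/build/fio/spdk_nvme

    # get_nvme_bdfs: PCIe addresses of all NVMe controllers.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    for bdf in "${bdfs[@]}"; do
        # Skip controllers that expose no active namespace.
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" \
            | grep -qE '^Namespace ID:[0-9]+' || continue

        # The ASan runtime must load before the instrumented plugin.
        asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

        # fio's filename syntax needs '.' instead of ':' in the traddr.
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
            "$rootdir/app/fio/nvme/example_config.fio" \
            "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs=4096
    done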
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:50.235 04:58:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:50.235 04:58:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:50.235 04:58:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:50.235 04:58:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:50.235 04:58:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:50.235 04:58:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:50.235 04:58:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:50.235 04:58:19 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:50.235 04:58:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:50.235 04:58:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:50.235 04:58:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:50.235 04:58:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:50.235 04:58:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:50.235 04:58:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:50.235 04:58:19 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:50.235 04:58:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:50.235 04:58:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:50.235 04:58:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:50.235 04:58:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:50.235 04:58:19 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:50.235 04:58:19 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:50.235 04:58:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:50.235 04:58:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:50.496 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:50.496 fio-3.35 00:08:50.496 Starting 1 thread 00:08:57.084 00:08:57.084 test: (groupid=0, jobs=1): err= 0: pid=75769: Thu Nov 28 04:58:25 2024 00:08:57.084 read: IOPS=16.9k, BW=66.0MiB/s (69.2MB/s)(132MiB/2001msec) 00:08:57.084 slat (nsec): min=4831, max=70301, avg=6335.54, stdev=3257.42 00:08:57.084 clat (usec): min=291, max=10421, avg=3747.26, stdev=1181.40 00:08:57.084 lat (usec): min=296, max=10489, avg=3753.60, stdev=1182.57 00:08:57.084 clat percentiles (usec): 00:08:57.084 | 1.00th=[ 2343], 5.00th=[ 2638], 10.00th=[ 2769], 20.00th=[ 2900], 00:08:57.084 | 30.00th=[ 3032], 40.00th=[ 
3130], 50.00th=[ 3261], 60.00th=[ 3458], 00:08:57.084 | 70.00th=[ 3851], 80.00th=[ 4686], 90.00th=[ 5604], 95.00th=[ 6325], 00:08:57.084 | 99.00th=[ 7373], 99.50th=[ 7832], 99.90th=[ 8586], 99.95th=[ 9503], 00:08:57.084 | 99.99th=[10290] 00:08:57.084 bw ( KiB/s): min=65104, max=70528, per=100.00%, avg=67877.33, stdev=2714.08, samples=3 00:08:57.084 iops : min=16276, max=17632, avg=16969.33, stdev=678.52, samples=3 00:08:57.084 write: IOPS=17.0k, BW=66.2MiB/s (69.4MB/s)(133MiB/2001msec); 0 zone resets 00:08:57.084 slat (nsec): min=4923, max=97397, avg=6497.78, stdev=3318.11 00:08:57.084 clat (usec): min=209, max=10338, avg=3789.64, stdev=1195.98 00:08:57.084 lat (usec): min=215, max=10346, avg=3796.14, stdev=1197.17 00:08:57.084 clat percentiles (usec): 00:08:57.084 | 1.00th=[ 2343], 5.00th=[ 2671], 10.00th=[ 2802], 20.00th=[ 2933], 00:08:57.084 | 30.00th=[ 3064], 40.00th=[ 3163], 50.00th=[ 3294], 60.00th=[ 3490], 00:08:57.084 | 70.00th=[ 3949], 80.00th=[ 4752], 90.00th=[ 5669], 95.00th=[ 6325], 00:08:57.084 | 99.00th=[ 7439], 99.50th=[ 7898], 99.90th=[ 8848], 99.95th=[ 9634], 00:08:57.084 | 99.99th=[ 9896] 00:08:57.084 bw ( KiB/s): min=65576, max=70032, per=99.87%, avg=67725.33, stdev=2232.16, samples=3 00:08:57.084 iops : min=16394, max=17508, avg=16931.33, stdev=558.04, samples=3 00:08:57.084 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:08:57.084 lat (msec) : 2=0.36%, 4=70.93%, 10=28.65%, 20=0.01% 00:08:57.084 cpu : usr=98.70%, sys=0.10%, ctx=4, majf=0, minf=625 00:08:57.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:57.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:57.084 issued rwts: total=33826,33922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:57.084 00:08:57.084 Run status group 0 (all jobs): 00:08:57.084 READ: bw=66.0MiB/s (69.2MB/s), 66.0MiB/s-66.0MiB/s (69.2MB/s-69.2MB/s), io=132MiB (139MB), run=2001-2001msec 00:08:57.084 WRITE: bw=66.2MiB/s (69.4MB/s), 66.2MiB/s-66.2MiB/s (69.4MB/s-69.4MB/s), io=133MiB (139MB), run=2001-2001msec 00:08:57.084 ----------------------------------------------------- 00:08:57.084 Suppressions used: 00:08:57.084 count bytes template 00:08:57.084 1 32 /usr/src/fio/parse.c 00:08:57.084 1 8 libtcmalloc_minimal.so 00:08:57.084 ----------------------------------------------------- 00:08:57.084 00:08:57.084 04:58:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:57.084 04:58:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:57.084 04:58:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:57.084 04:58:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:57.084 04:58:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:57.084 04:58:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:57.084 04:58:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:57.084 04:58:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:57.084 04:58:26 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:57.084 04:58:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:57.084 04:58:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:57.084 04:58:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:57.084 04:58:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.084 04:58:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:57.084 04:58:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:57.084 04:58:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:57.084 04:58:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.084 04:58:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:57.084 04:58:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:57.084 04:58:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:57.084 04:58:26 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:57.084 04:58:26 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:57.084 04:58:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:57.084 04:58:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:57.084 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:57.084 fio-3.35 00:08:57.084 Starting 1 thread 00:09:02.369 00:09:02.369 test: (groupid=0, jobs=1): err= 0: pid=75824: Thu Nov 28 04:58:30 2024 00:09:02.369 read: IOPS=18.8k, BW=73.5MiB/s (77.0MB/s)(147MiB/2001msec) 00:09:02.369 slat (nsec): min=3398, max=73374, avg=5708.10, stdev=2930.38 00:09:02.369 clat (usec): min=221, max=13130, avg=3379.22, stdev=1244.85 00:09:02.369 lat (usec): min=225, max=13192, avg=3384.93, stdev=1246.14 00:09:02.369 clat percentiles (usec): 00:09:02.369 | 1.00th=[ 1958], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2474], 00:09:02.369 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2868], 60.00th=[ 3130], 00:09:02.369 | 70.00th=[ 3621], 80.00th=[ 4424], 90.00th=[ 5211], 95.00th=[ 5866], 00:09:02.369 | 99.00th=[ 7373], 99.50th=[ 8029], 99.90th=[10028], 99.95th=[12125], 00:09:02.369 | 99.99th=[13042] 00:09:02.369 bw ( KiB/s): min=70392, max=75968, per=97.06%, avg=73007.33, stdev=2803.99, samples=3 00:09:02.369 iops : min=17598, max=18992, avg=18251.67, stdev=701.03, samples=3 00:09:02.369 write: IOPS=18.8k, BW=73.5MiB/s (77.0MB/s)(147MiB/2001msec); 0 zone resets 00:09:02.369 slat (nsec): min=3510, max=73330, avg=5886.71, stdev=3107.97 00:09:02.369 clat (usec): min=237, max=13056, avg=3404.00, stdev=1247.06 00:09:02.369 lat (usec): min=242, max=13070, avg=3409.89, stdev=1248.40 00:09:02.369 clat percentiles (usec): 00:09:02.369 | 1.00th=[ 1975], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2474], 00:09:02.369 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2900], 60.00th=[ 3163], 00:09:02.369 | 70.00th=[ 3654], 80.00th=[ 4424], 90.00th=[ 5276], 95.00th=[ 5866], 
00:09:02.369 | 99.00th=[ 7373], 99.50th=[ 8029], 99.90th=[10028], 99.95th=[12256], 00:09:02.369 | 99.99th=[12911] 00:09:02.369 bw ( KiB/s): min=70504, max=75368, per=96.87%, avg=72884.67, stdev=2433.62, samples=3 00:09:02.369 iops : min=17626, max=18842, avg=18221.00, stdev=608.42, samples=3 00:09:02.369 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:09:02.369 lat (msec) : 2=1.10%, 4=73.34%, 10=25.40%, 20=0.10% 00:09:02.369 cpu : usr=98.95%, sys=0.00%, ctx=16, majf=0, minf=624 00:09:02.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:02.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.369 issued rwts: total=37628,37638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.369 00:09:02.369 Run status group 0 (all jobs): 00:09:02.369 READ: bw=73.5MiB/s (77.0MB/s), 73.5MiB/s-73.5MiB/s (77.0MB/s-77.0MB/s), io=147MiB (154MB), run=2001-2001msec 00:09:02.369 WRITE: bw=73.5MiB/s (77.0MB/s), 73.5MiB/s-73.5MiB/s (77.0MB/s-77.0MB/s), io=147MiB (154MB), run=2001-2001msec 00:09:02.369 ----------------------------------------------------- 00:09:02.369 Suppressions used: 00:09:02.369 count bytes template 00:09:02.369 1 32 /usr/src/fio/parse.c 00:09:02.369 1 8 libtcmalloc_minimal.so 00:09:02.369 ----------------------------------------------------- 00:09:02.369 00:09:02.369 ************************************ 00:09:02.369 END TEST nvme_fio 00:09:02.369 ************************************ 00:09:02.369 04:58:30 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:02.369 04:58:30 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:02.369 00:09:02.369 real 0m24.587s 00:09:02.369 user 0m17.669s 00:09:02.369 sys 0m10.884s 00:09:02.369 04:58:30 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.369 04:58:30 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:02.369 ************************************ 00:09:02.369 END TEST nvme 00:09:02.369 ************************************ 00:09:02.369 00:09:02.369 real 1m31.820s 00:09:02.369 user 3m32.364s 00:09:02.369 sys 0m20.777s 00:09:02.369 04:58:30 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.369 04:58:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:02.369 04:58:30 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:02.369 04:58:30 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:02.369 04:58:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.369 04:58:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.369 04:58:30 -- common/autotest_common.sh@10 -- # set +x 00:09:02.369 ************************************ 00:09:02.369 START TEST nvme_scc 00:09:02.369 ************************************ 00:09:02.369 04:58:30 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:02.369 * Looking for test storage... 
00:09:02.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:02.369 04:58:31 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:02.369 04:58:31 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:02.369 04:58:31 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:02.369 04:58:31 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.369 04:58:31 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:02.369 04:58:31 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.369 04:58:31 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.369 --rc genhtml_branch_coverage=1 00:09:02.369 --rc genhtml_function_coverage=1 00:09:02.369 --rc genhtml_legend=1 00:09:02.369 --rc geninfo_all_blocks=1 00:09:02.369 --rc geninfo_unexecuted_blocks=1 00:09:02.369 00:09:02.369 ' 00:09:02.369 04:58:31 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.369 --rc genhtml_branch_coverage=1 00:09:02.369 --rc genhtml_function_coverage=1 00:09:02.369 --rc genhtml_legend=1 00:09:02.369 --rc geninfo_all_blocks=1 00:09:02.369 --rc geninfo_unexecuted_blocks=1 00:09:02.369 00:09:02.369 ' 00:09:02.369 04:58:31 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.369 --rc genhtml_branch_coverage=1 00:09:02.369 --rc genhtml_function_coverage=1 00:09:02.369 --rc genhtml_legend=1 00:09:02.369 --rc geninfo_all_blocks=1 00:09:02.369 --rc geninfo_unexecuted_blocks=1 00:09:02.369 00:09:02.369 ' 00:09:02.369 04:58:31 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:02.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.370 --rc genhtml_branch_coverage=1 00:09:02.370 --rc genhtml_function_coverage=1 00:09:02.370 --rc genhtml_legend=1 00:09:02.370 --rc geninfo_all_blocks=1 00:09:02.370 --rc geninfo_unexecuted_blocks=1 00:09:02.370 00:09:02.370 ' 00:09:02.370 04:58:31 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:02.370 04:58:31 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:02.370 04:58:31 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:02.370 04:58:31 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:02.370 04:58:31 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:02.370 04:58:31 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.370 04:58:31 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.370 04:58:31 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.370 04:58:31 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.370 04:58:31 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.370 04:58:31 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.370 04:58:31 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.370 04:58:31 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:02.370 04:58:31 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
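Before exporting the LCOV_OPTS/LCOV coverage flags above, the script gates on `lt 1.15 2`, i.e. "is the baseline older than lcov 2". The xtrace shows the mechanics: both version strings are split on '.', '-' and ':' into ver1/ver2, walked to the longer length, and compared field by field as decimals. A rough reconstruction of that comparison (the function names match the trace; the body is an approximation, not the verbatim scripts/common.sh):

    # Approximation of scripts/common.sh's version check, per the xtrace.
    cmp_versions() { # <ver1> <op> <ver2>
        local IFS=.-:              # split fields on '.', '-' and ':'
        local ver1 ver2 v op=$2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        # Walk the longer field list; a missing field counts as 0.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((10#${ver1[v]:-0} > 10#${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((10#${ver1[v]:-0} < 10#${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> true, as traced above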
00:09:02.370 04:58:31 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:02.370 04:58:31 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:02.370 04:58:31 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:02.370 04:58:31 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:02.370 04:58:31 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:02.370 04:58:31 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:02.370 04:58:31 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:02.370 04:58:31 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:02.370 04:58:31 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:02.370 04:58:31 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:02.370 04:58:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:02.370 04:58:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:02.370 04:58:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:02.370 04:58:31 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:02.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:02.370 Waiting for block devices as requested 00:09:02.370 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:02.370 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:02.667 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:02.667 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:07.957 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:07.957 04:58:36 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:07.957 04:58:36 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:07.957 04:58:36 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:07.957 04:58:36 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:07.957 04:58:36 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
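Everything from here down is scan_nvme_ctrls at work: for each /sys/class/nvme/nvme* it resolves the PCI address, then nvme_get snapshots `nvme id-ctrl /dev/nvme0` into a bash associative array, which is why the trace is a long run of eval 'nvme0[reg]="val"' pairs (vid, ssvid, sn, mdts, oncs, ...). Later checks, such as the SCC test probing ONCS, read this array instead of touching the device again. In essence the parse loop is the following sketch (it assumes id-ctrl's usual 'field : value' text output and is simplified relative to the traced nvme_get, which trims and shifts differently):

    # Sketch of the nvme_get parse loop driving the eval lines below.
    declare -A nvme0
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue            # keep only 'field : value' rows
        reg=${reg//[[:space:]]/}             # e.g. 'vid'
        nvme0[$reg]=${val# }                 # e.g. '0x1b36'
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

    echo "${nvme0[oncs]}"                    # 0x15d in the capture below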
00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.957 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@22 
00:09:07.958 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0: tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
00:09:07.959 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0: sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0
00:09:07.959 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0: pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0
00:09:07.959 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0: fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0
00:09:07.960 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0: maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
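Every assignment above comes from the same nvme_get helper visible in the trace: it runs nvme-cli, splits each output line on the first colon with IFS=:, and evals the value into a global associative array keyed by register name. A minimal sketch of that pattern, assuming nvme-cli is in PATH; the standalone name nvme_get_sketch is ours, the real helper is nvme_get() in nvme/functions.sh:

    #!/usr/bin/env bash
    # Sketch of the parsing loop traced at nvme/functions.sh@16-23.
    nvme_get_sketch() {
        local ref=$1 op=$2 dev=$3 reg val
        local -gA "$ref=()"                    # global assoc array, e.g. nvme0=()
        while IFS=: read -r reg val; do        # split each line on the first ':'
            reg=${reg//[[:space:]]/}           # 'ps 0   ' -> 'ps0'
            val=${val# }                       # drop the single leading space
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg}]=\"${val}\""   # e.g. nvme0[sqes]="0x66"
        done < <(nvme "$op" "$dev")
    }
    nvme_get_sketch nvme0 id-ctrl /dev/nvme0
    echo "oncs=${nvme0[oncs]} nn=${nvme0[nn]} subnqn=${nvme0[subnqn]}"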
00:09:07.960 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:09:07.960 04:58:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:09:07.960 04:58:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:07.960 04:58:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:09:07.960 04:58:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:09:07.960 04:58:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:09:07.960 04:58:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
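The for loop at functions.sh@54 relies on bash's extglob feature: for controller nvme0 the pattern expands to @(ng0|nvme0n)*, so a single glob picks up both the generic character-device entry ng0n1 and the block-device entry nvme0n1 under /sys/class/nvme/nvme0. A standalone illustration of the same expansion (the path is taken from the trace):

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme0
    # "${ctrl##*nvme}" -> "0" and "${ctrl##*/}" -> "nvme0", so the pattern
    # below expands to @(ng0|nvme0n)* and matches both ng0n1 and nvme0n1.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace entry: ${ns##*/}"
    done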
00:09:07.960 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng0n1: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:09:07.960 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng0n1: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:09:07.961 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng0n1: nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0
00:09:07.961 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng0n1: nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:07.961 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng0n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:07.961 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng0n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:07.962 04:58:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
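lbaf4 is the format in use: lbads:12 means 2^12 = 4096-byte logical blocks with no metadata (ms:0), so the reported nsze of 0x140000 blocks works out to exactly 5 GiB. A quick check of that arithmetic in the shell:

    #!/usr/bin/env bash
    # lbaf4 (in use): ms:0 lbads:12 -> 4096-byte logical blocks, no metadata.
    nsze=0x140000                                 # blocks, from id-ns above
    lbads=12
    bytes=$(( nsze * (1 << lbads) ))
    echo "$bytes bytes ($(( bytes >> 30 )) GiB)"  # 5368709120 bytes (5 GiB)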
00:09:07.962 04:58:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:07.962 04:58:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:07.962 04:58:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:09:07.962 04:58:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:07.962 04:58:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:09:07.962 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:09:07.962 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1: nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1: nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:07.963 04:58:36 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
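At functions.sh@60-63 each scanned controller is registered in a set of global associative arrays: ctrls maps the device name to its identify array, nvmes to the name of its namespace map, bdfs to the PCI address, and ordered_ctrls fixes a stable ordering. A hedged sketch of how later test code could walk those registries; the array names come from the trace, but the loop and the hand-seeded data are ours to keep the example self-contained:

    #!/usr/bin/env bash
    # Illustrative consumer of the registries populated at functions.sh@60-63.
    declare -A ctrls=( [nvme0]=nvme0 )
    declare -A bdfs=( [nvme0]=0000:00:11.0 )
    declare -A nvme0_ns=( [1]=nvme0n1 )
    declare -A nvmes=( [nvme0]=nvme0_ns )
    for ctrl_dev in "${!ctrls[@]}"; do
        declare -n ns_map=${nvmes[$ctrl_dev]}  # nameref, as functions.sh@53 does
        echo "$ctrl_dev @ ${bdfs[$ctrl_dev]}: namespaces: ${ns_map[*]}"
        unset -n ns_map
    done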
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:09:07.963 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1: vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400
00:09:07.964 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1: cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
00:09:07.964 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1: fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1: oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.965 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
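sqes=0x66 and cqes=0x44 pack the maximum (high nibble) and required (low nibble) queue entry sizes as powers of two, so this controller uses the standard 64-byte submission and 16-byte completion entries, while nn=256 caps the namespace count. A quick nibble decode:

    sqes=0x66 cqes=0x44
    printf 'SQE max/min: %d/%d bytes\n' $((1 << (sqes >> 4))) $((1 << (sqes & 0xf)))   # 64/64
    printf 'CQE max/min: %d/%d bytes\n' $((1 << (cqes >> 4))) $((1 << (cqes & 0xf)))   # 16/16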
00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.966 04:58:36 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:07.966 04:58:36 
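With the controller snapshot done, the loop above switches to namespaces: a nameref (_ctrl_ns) points at this controller's namespace map, and an extglob pattern picks up both the generic character node (ng1n1) and the block node (nvme1n1) under sysfs, snapshotting each with nvme_get ... id-ns. How that pattern expands for nvme1, as a standalone sketch:

    shopt -s extglob nullglob            # the pattern below requires extglob
    ctrl=/sys/class/nvme/nvme1
    declare -A nvme1_ns=()
    declare -n _ctrl_ns=nvme1_ns         # nameref into this controller's map
    # "${ctrl##*nvme}" -> 1 and "${ctrl##*/}" -> nvme1, so the glob is
    # @(ng1|nvme1n)* and matches .../ng1n1 and .../nvme1n1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
    done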
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:07.966 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:07.967 04:58:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.967 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.968 04:58:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 
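The ng1n1 snapshot ends with its LBA format table. flbas=0x7 (captured earlier) selects lbaf7, the entry nvme-cli also marks "(in use)": ms:64 lbads:12, i.e. 4096-byte data blocks with 64 bytes of metadata each. Decoding it from the captured array, assuming the test context where ng1n1 is populated:

    flbas=0x7
    lbaf=${ng1n1[lbaf$((flbas & 0xf))]}        # -> 'ms:64 lbads:12 rp:0 (in use)'
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}
    echo "data block: $((1 << lbads)) bytes"               # 4096
    echo "namespace:  $((0x17a17a * (1 << lbads))) bytes"  # nsze * block = ~6.3 GB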
04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.968 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:07.969 
04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:07.969 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
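The records above are the tail of one nvme_get pass: for every "field : value" line that nvme-cli prints, the traced loop splits on ':' (the IFS=: / read -r reg val pairs at functions.sh@21), skips empty values (the [[ -n ... ]] tests at @22), and evals the result into a global associative array named after the device (@23). A minimal sketch of that loop, assuming this simplified shape rather than the exact nvme/functions.sh source:

    # Sketch only: parse "field : value" output from nvme-cli into a
    # named global associative array, as traced at functions.sh@16-23.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # skip blank lines
            reg=${reg//[[:space:]]/}             # assumed trim, not traced
            eval "${ref}[${reg}]=\"${val# }\""   # e.g. nvme1n1[mssrl]="128"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

After the pass, callers can read values straight from the array, e.g. ${nvme1n1[mssrl]} is 128 here. Note that for an "lbaf 0 : ms:0 lbads:9 rp:0" line only the first ':' separates reg from val, which is why the lbafN values above keep their inner colons.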
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:07.970 04:58:37 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:07.970 04:58:37 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:09:07.970 04:58:37 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:07.970 04:58:37 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
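Between controllers the outer walk at functions.sh@47-51 repeats: each /sys/class/nvme/nvme* entry is resolved to its PCI address (0000:00:12.0 here) and filtered through pci_can_use from scripts/common.sh, which in this run matches an empty allow list, finds no block list, and returns 0. A rough equivalent of the loop; the sysfs-to-BDF step is an assumption on my part, and pci_can_use is used as the traced helper, not reimplemented:

    # Sketch of the controller walk traced above.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:12.0
        pci_can_use "$pci" || continue                   # returned 0 above
        ctrl_dev=${ctrl##*/}                             # e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
    done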
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:09:07.970 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:09:07.971 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:09:07.972 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
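With id-ctrl parsed, later checks read the array directly. For instance mdts=7 above caps a single transfer at 2^MDTS units of the controller's minimum memory page size; assuming the usual 4 KiB minimum page for this QEMU controller, that works out to 512 KiB. A hypothetical follow-up using the array populated above:

    mps_min=4096                                # assumed CAP.MPSMIN page size
    max_xfer=$((mps_min << ${nvme2[mdts]}))     # 4096 << 7 = 524288 bytes
    echo "nvme2: nn=${nvme2[nn]} oncs=${nvme2[oncs]} max xfer ${max_xfer}B"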
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
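Two bash features carry the namespace walk that follows: `local -n _ctrl_ns=nvme2_ns` is a nameref, so every `_ctrl_ns[...]=` assignment below actually lands in the per-controller array nvme2_ns; and the @54 glob is an extglob alternation matching both character-device nodes (ng2n1) and block nodes (nvme2n1) under the controller's sysfs directory. A standalone illustration of how that pattern expands:

    shopt -s extglob                  # required for @(...) patterns
    ctrl=/sys/class/nvme/nvme2
    # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the glob is
    # "$ctrl/"@(ng2|nvme2n)* and matches ng2n1, ng2n2, nvme2n1, ...
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node: ${ns##*/}"
    done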
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:09:07.973 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
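The @58 registration keys the per-controller namespace map by index, recovered with `${ns##*n}`: strip the longest prefix ending in "n", leaving "1" for ng2n1. In isolation:

    ns=ng2n1
    declare -A nvme2_ns=()
    nvme2_ns[${ns##*n}]=$ns    # nvme2_ns[1]=ng2n1; ng2n2 yields index 2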
00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.974 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 
04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.975 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:07.976 04:58:37 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.976 04:58:37 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:07.976 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.977 04:58:37 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:07.977 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
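Each namespace of this controller is visited twice by the loop traced here: once through its char-generic node (ng2n1, ng2n2, ng2n3) and once through its block node (nvme2n1 onwards), so the functions.sh@58 assignment for a block device later overwrites the ng entry for the same NSID in _ctrl_ns, which is visible above as _ctrl_ns[...]=ng2n3 followed by the nvme2n1 pass. A sketch of the surrounding discovery loop as it appears at functions.sh@54-58 (that extglob and nullglob are enabled is an assumption):

    shopt -s extglob nullglob
    declare -A _ctrl_ns
    ctrl=/sys/class/nvme/nvme2                       # controller being scanned

    # functions.sh@54: "ng2"* matches the char-generic nodes, "nvme2n"* the block nodes
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue                     # functions.sh@55
        ns_dev=${ns##*/}                             # functions.sh@56: e.g. ng2n3
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # functions.sh@57 (sketched earlier)
        _ctrl_ns[${ns##*n}]=$ns_dev                  # functions.sh@58: keyed by NSID, e.g. _ctrl_ns[3]=ng2n3
    done

For ctrl=/sys/class/nvme/nvme2, ${ctrl##*nvme} expands to "2" and ${ctrl##*/} to "nvme2", so the extglob alternation resolves to @(ng2|nvme2n)* and picks up both device flavors in one pass.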
00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.978 04:58:37 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:07.978 04:58:37 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.978 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
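The fields just captured for nvme2n1 pin down the namespace geometry: flbas=0x4 selects LBA format 4, whose descriptor reads "ms:0 lbads:12 rp:0 (in use)", i.e. no separate metadata and 2^12 = 4096-byte data blocks, and with nsze = ncap = nuse = 0x100000 such blocks each of these namespaces is fully allocated at 4 GiB. A short sketch reading that back out of the array populated above (using the fact that the low four flbas bits index the LBA format list):

    # Assumes nvme2n1[] was filled in by the nvme_get call traced above.
    fmt=${nvme2n1[lbaf$((${nvme2n1[flbas]} & 0xf))]}   # -> "ms:0 lbads:12 rp:0 (in use)"
    lbads=${fmt#*lbads:}                               # strip through "lbads:"
    lbads=${lbads%% *}                                 # -> 12
    echo $((${nvme2n1[nsze]} * (1 << lbads)))          # -> 4294967296 bytes (4 GiB)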
00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:07.979 04:58:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.979 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:07.980 04:58:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:07.981 
04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:07.981 04:58:37 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.981 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:07.982 04:58:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:07.982 04:58:37 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:07.982 04:58:37 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:07.982 04:58:37 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:07.982 04:58:37 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:07.982 04:58:37 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:07.982 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:07.983 04:58:37 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:07.983 04:58:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 
04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.983 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:07.984 04:58:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 
04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:07.984 
04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:07.984 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.985 04:58:37 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:07.985 04:58:37 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:07.985 04:58:37 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:07.985 04:58:37 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
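The ctrl_has_scc loop traced above is the entire feature gate: for each controller cached by the scan, get_oncs returns the ONCS field recorded from id-ctrl (0x15d on every QEMU controller here) and the arithmetic test (( oncs & 1 << 8 )) checks the bit the NVMe spec assigns to the Copy (simple copy) command, so all four controllers qualify and nvme1 is chosen first. A minimal standalone sketch of the same check, assuming nvme-cli is installed; the device path is illustrative, not taken from this log:

#!/usr/bin/env bash
# Sketch of the ctrl_has_scc/get_oncs logic traced above: read ONCS from
# id-ctrl and test bit 8 (Copy command supported). Assumes nvme-cli.
ctrl_has_scc() {
    local dev=$1 oncs
    # nvme id-ctrl prints lines such as "oncs      : 0x15d"
    oncs=$(nvme id-ctrl "$dev" | awk '/^oncs/ {print $3}')
    (( oncs & 1 << 8 ))   # 0x15d has bit 8 set, so this succeeds here
}

# Illustrative usage; /dev/nvme1 is an assumption, not taken from this log.
ctrl_has_scc /dev/nvme1 && echo "/dev/nvme1 supports simple copy (SCC)"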
00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:08.245 04:58:37 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:08.245 04:58:37 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:08.245 04:58:37 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:08.245 04:58:37 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:08.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:09.070 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:09.070 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:09.070 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:09.070 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:09.070 04:58:38 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:09.070 04:58:38 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:09.070 04:58:38 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.070 04:58:38 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:09.070 ************************************ 00:09:09.070 START TEST nvme_simple_copy 00:09:09.070 ************************************ 00:09:09.070 04:58:38 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:09.329 Initializing NVMe Controllers 00:09:09.329 Attaching to 0000:00:10.0 00:09:09.329 Controller supports SCC. Attached to 0000:00:10.0 00:09:09.329 Namespace ID: 1 size: 6GB 00:09:09.329 Initialization complete. 
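The result lines that follow summarize what the simple_copy binary did after attaching: it filled LBAs 0-63 of namespace 1 with random data, issued a single Copy command with destination LBA 256, and found all 64 destination blocks matching. Roughly the same exercise can be reproduced from the shell; a sketch assuming a recent nvme-cli whose copy subcommand accepts these flag spellings, with the device path and 4096-byte block size as illustrative values:

#!/usr/bin/env bash
# Sketch of the simple-copy exercise: write 64 random LBAs, copy them to
# LBA 256 with one Copy command, then verify both ranges match. Destructive;
# meant for a throwaway QEMU test namespace. Device, block size, and flag
# spellings are assumptions, not taken from this log.
dev=/dev/nvme1n1 bs=4096

dd if=/dev/urandom of="$dev" bs="$bs" count=64 oflag=direct    # LBAs 0-63
nvme copy "$dev" --sdlba=256 --slbs=0 --blocks=63              # nlb is 0-based
dd if="$dev" of=/tmp/src bs="$bs" count=64 iflag=direct
dd if="$dev" of=/tmp/dst bs="$bs" skip=256 count=64 iflag=direct
cmp -s /tmp/src /tmp/dst && echo "LBAs matching written data: 64"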
00:09:09.329 00:09:09.329 Controller QEMU NVMe Ctrl (12340 ) 00:09:09.329 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:09.329 Namespace Block Size:4096 00:09:09.329 Writing LBAs 0 to 63 with Random Data 00:09:09.329 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:09.329 LBAs matching Written Data: 64 00:09:09.329 00:09:09.329 real 0m0.221s 00:09:09.329 user 0m0.078s 00:09:09.329 sys 0m0.042s 00:09:09.329 04:58:38 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.329 04:58:38 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:09.329 ************************************ 00:09:09.329 END TEST nvme_simple_copy 00:09:09.329 ************************************ 00:09:09.329 00:09:09.329 real 0m7.507s 00:09:09.329 user 0m1.075s 00:09:09.329 sys 0m1.333s 00:09:09.329 04:58:38 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.329 04:58:38 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:09.329 ************************************ 00:09:09.329 END TEST nvme_scc 00:09:09.329 ************************************ 00:09:09.329 04:58:38 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:09.329 04:58:38 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:09.329 04:58:38 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:09.329 04:58:38 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:09.329 04:58:38 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:09.329 04:58:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.329 04:58:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.329 04:58:38 -- common/autotest_common.sh@10 -- # set +x 00:09:09.329 ************************************ 00:09:09.329 START TEST nvme_fdp 00:09:09.329 ************************************ 00:09:09.329 04:58:38 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:09.329 * Looking for test storage... 00:09:09.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:09.329 04:58:38 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:09.329 04:58:38 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:09.329 04:58:38 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:09.587 04:58:38 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.587 04:58:38 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:09.587 04:58:38 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.587 04:58:38 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:09.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.587 --rc genhtml_branch_coverage=1 00:09:09.587 --rc genhtml_function_coverage=1 00:09:09.587 --rc genhtml_legend=1 00:09:09.587 --rc geninfo_all_blocks=1 00:09:09.587 --rc geninfo_unexecuted_blocks=1 00:09:09.587 00:09:09.587 ' 00:09:09.587 04:58:38 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:09.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.587 --rc genhtml_branch_coverage=1 00:09:09.587 --rc genhtml_function_coverage=1 00:09:09.587 --rc genhtml_legend=1 00:09:09.587 --rc geninfo_all_blocks=1 00:09:09.587 --rc geninfo_unexecuted_blocks=1 00:09:09.587 00:09:09.587 ' 00:09:09.587 04:58:38 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:09.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.587 --rc genhtml_branch_coverage=1 00:09:09.588 --rc genhtml_function_coverage=1 00:09:09.588 --rc genhtml_legend=1 00:09:09.588 --rc geninfo_all_blocks=1 00:09:09.588 --rc geninfo_unexecuted_blocks=1 00:09:09.588 00:09:09.588 ' 00:09:09.588 04:58:38 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:09.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.588 --rc genhtml_branch_coverage=1 00:09:09.588 --rc genhtml_function_coverage=1 00:09:09.588 --rc genhtml_legend=1 00:09:09.588 --rc geninfo_all_blocks=1 00:09:09.588 --rc geninfo_unexecuted_blocks=1 00:09:09.588 00:09:09.588 ' 00:09:09.588 04:58:38 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:09.588 04:58:38 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:09.588 04:58:38 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:09.588 04:58:38 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:09.588 04:58:38 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.588 04:58:38 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.588 04:58:38 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.588 04:58:38 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.588 04:58:38 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.588 04:58:38 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.588 04:58:38 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.588 04:58:38 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.588 04:58:38 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:09.588 04:58:38 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.588 04:58:38 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:09.588 04:58:38 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:09.588 04:58:38 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:09.588 04:58:38 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:09.588 04:58:38 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:09.588 04:58:38 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:09.588 04:58:38 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:09.588 04:58:38 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:09.588 04:58:38 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:09.588 04:58:38 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.588 04:58:38 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:09.847 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:09.847 Waiting for block devices as requested 00:09:09.847 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:10.105 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:10.105 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:10.105 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:15.375 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:15.375 04:58:44 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:15.375 04:58:44 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:15.375 04:58:44 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:15.375 04:58:44 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:15.375 04:58:44 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:15.375 04:58:44 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.375 04:58:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.375 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:15.376 04:58:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:15.376 04:58:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.376 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:15.377 04:58:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 
04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:15.377 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:15.378 04:58:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:15.378 04:58:44 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:15.378 04:58:44 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.378 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
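
The ng0n1 entries being filled in above come from the per-namespace loop at functions.sh@54-57: an extglob pattern derives both the generic character node (ng0n1) and the block node (nvme0n1) from the controller path. A minimal standalone sketch of that pattern, using the names and paths shown in the trace (not the SPDK script itself; nullglob is added here so the sketch is safe on a machine without NVMe devices):

  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme0                          # controller sysfs path, as in the trace
  # "ng${ctrl##*nvme}"  expands to ng0    (generic char-dev prefix)
  # "${ctrl##*/}n"      expands to nvme0n (block-dev prefix)
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue                        # the functions.sh@55 guard seen above
      ns_dev=${ns##*/}                                # ng0n1, nvme0n1, ...
      echo "would run: nvme id-ns /dev/$ns_dev"       # functions.sh@57 then calls nvme_get
  done
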
00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:15.379 04:58:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
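
All of the ng0n1[...] assignments just traced follow one mechanism, visible at functions.sh@16-23: nvme-cli's id-ns output ("key : value" per line) is read with IFS=:, and each pair is stored into a global associative array named by the first argument via eval. A hedged reconstruction of that loop (the whitespace trimming is assumed from the bare keys and values the trace shows; the real functions.sh may differ in detail):

  nvme_get() {
      local ref=$1 reg val
      shift                                           # functions.sh@18
      local -gA "$ref=()"                             # functions.sh@20: global assoc array
      while IFS=: read -r reg val; do                 # functions.sh@21: split at the first ':'
          reg=${reg//[[:space:]]/}                    # assumed trim; yields keys like nsze, ps0
          val=${val#"${val%%[![:space:]]*}"}          # assumed: strip leading blanks only
          [[ -n $val ]] || continue                   # functions.sh@22: skip header/blank lines
          eval "${ref}[$reg]=\"\$val\""               # functions.sh@23
      done < <("$@")
  }
  # In the trace, functions.sh@57 calls: nvme_get ng0n1 id-ns /dev/ng0n1, with the
  # nvme binary prepended at @16; this sketch takes the full command instead:
  # nvme_get ng0n1 /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
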
00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.379 04:58:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:15.380 04:58:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.380 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:15.381 04:58:44 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:15.381 04:58:44 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:15.381 04:58:44 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:15.381 04:58:44 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:15.381 04:58:44 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.381 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:15.382 04:58:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
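
The cycle now repeating for nvme1 is the same one just completed for nvme0: functions.sh@47-63 walks /sys/class/nvme/nvme*, gates each controller on pci_can_use (scripts/common.sh, which honors the PCI allow/block lists), runs id-ctrl and per-namespace id-ns, and finally records the controller in the ctrls/nvmes/bdfs/ordered_ctrls tables, as seen above for nvme0 (BDF 0000:00:11.0). A condensed sketch of that outer loop; pci_can_use is stubbed, and how the trace's @49 pci= value is derived is not shown there, so the readlink below is an assumption:

  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls

  pci_can_use() { return 0; }                         # stand-in for scripts/common.sh@18-27

  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue                      # functions.sh@48
      pci=$(basename "$(readlink -f "$ctrl/device")") # assumed source of @49's pci=...
      pci_can_use "$pci" || continue                  # functions.sh@50
      ctrl_dev=${ctrl##*/}                            # functions.sh@51: nvme0, nvme1, ...
      # nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev" # @52, then per-namespace id-ns @54-58
      ctrls["$ctrl_dev"]=$ctrl_dev                    # functions.sh@60
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns               # functions.sh@61
      bdfs["$ctrl_dev"]=$pci                          # functions.sh@62
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev      # functions.sh@63
  done
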
00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.382 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
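
One quirk worth noting in these tables: keys such as ps0 and rwt (seen for nvme0 earlier, and again for nvme1 below) look odd because id-ctrl's power-state block spans two lines and contains colons of its own, and IFS=: read with two variables splits only at the first colon, leaving the remainder, colons included, in val. A two-line demo of that behavior:

  line='          rwt:0 rwl:0 idle_power:- active_power:-'
  IFS=: read -r reg val <<< "$line"
  printf 'reg=[%s] val=[%s]\n' "$reg" "$val"
  # prints: reg=[          rwt] val=[0 rwl:0 idle_power:- active_power:-]
  # hence nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' once reg is trimmed
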
00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:15.383 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:15.384 04:58:44 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:15.384 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:15.385 04:58:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
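
The trace repeats one small pattern from nvme/functions.sh: nvme_get runs an nvme-cli identify command, splits each "field : value" output line on ':' with read, and evals the pair into a global associative array named after the device. A minimal sketch of that loop, reconstructed from the functions.sh@16-@23 trace entries above; the exact whitespace handling is assumed, since the log only shows already-trimmed results:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # -g: array survives the function, e.g. ng1n1=()
        while IFS=: read -r reg val; do    # split "sqes : 0x66" on the first ':'
            reg=${reg//[[:space:]]/}       # assumed trim; also turns "lbaf  7" into lbaf7
            val=${val# }                   # assumed trim of the leading space only
            [[ -n $reg && -n $val ]] || continue   # skip header lines (see "[[ -n '' ]]")
            eval "${ref}[$reg]=\"$val\""   # e.g. ng1n1[nsze]="0x17a17a"
        done < <(nvme "$@")                # the log uses /usr/local/src/nvme-cli/nvme
    }

Trailing padding inside values (sn, mn, fr) survives this, which is why the trace stores entries like nvme2[sn]='12342 '.
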
00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.385 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:15.386 04:58:44 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:15.386 04:58:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:15.386 04:58:44 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.386 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:15.387 04:58:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
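
Each lbafN entry parsed above keeps nvme-cli's raw string (metadata size, LBA data size, relative performance, plus an "(in use)" marker), so the active block size has to be dug back out of it. A hedged example against the ng1n1 values captured earlier (flbas 0x7; lbaf7 "ms:64 lbads:12 rp:0 (in use)"), using the FLBAS low nibble as the format index:

    fmt=$(( ${ng1n1[flbas]} & 0xf ))       # low nibble of FLBAS -> 7
    lbads=${ng1n1[lbaf$fmt]#*lbads:}       # "12 rp:0 (in use)"
    lbads=${lbads%% *}                     # "12"
    echo "ng1n1 in-use block size: $(( 1 << lbads ))"   # 4096 bytes, with ms:64 metadata
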
00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:15.387 04:58:44 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:15.387 04:58:44 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:15.387 04:58:44 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:15.387 04:58:44 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:15.387 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
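
The functions.sh@47-@63 entries above (where nvme1 is registered and the scan moves on to nvme2 at 0000:00:12.0) outline the enclosing loop: every /sys/class/nvme/nvme* controller is identified, its ng*/nvme*n* namespace nodes are identified in turn, and everything lands in the ctrls, nvmes, bdfs and ordered_ctrls arrays. Roughly, as implied by the trace; the wrapper name, the BDF derivation and the array declarations are assumptions, and pci_can_use comes from scripts/common.sh as shown in the trace:

    shopt -s extglob                       # needed for the @(...) glob below
    declare -A ctrls nvmes bdfs            # assumed to be set up by the harness
    declare -a ordered_ctrls

    scan_nvme_ctrls() {                    # name assumed; the log never prints it
        local ctrl ctrl_dev ns ns_dev pci
        for ctrl in /sys/class/nvme/nvme*; do
            pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed; trace shows only the result
            pci_can_use "$pci" || continue                   # PCI_ALLOWED/PCI_BLOCKED filter
            ctrl_dev=${ctrl##*/}                             # e.g. nvme2
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            local -n _ctrl_ns=${ctrl_dev}_ns
            for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # ng1n1 and nvme1n1 alike
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns##*n}]=$ns_dev                  # keyed by namespace index
            done
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }

Note that ng1n1 and nvme1n1 both map to namespace index 1 at @58, so the block node written last is what the _ctrl_ns table ends up holding.
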
00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:15.388 04:58:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:15.388 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
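
Once populated, downstream helpers can consult these arrays instead of re-running nvme-cli. A small hypothetical check against values already visible in this trace (nvme1's oncs of 0x15d went by near the top of this block; bit 2 of ONCS is Dataset Management in the NVMe spec):

    if (( ${nvme1[oncs]} & 0x4 )); then
        echo "nvme1 supports Dataset Management"
    fi
    echo "ng1n1: nsze=${ng1n1[nsze]} ($(( ${ng1n1[nsze]} )) blocks)"   # 0x17a17a -> 1548666
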
00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:15.389 04:58:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:15.389 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.390 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.391 04:58:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:15.391 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.391 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.391 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.391 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.391 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:15.391 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:15.391 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.391 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.391 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.391 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:15.391 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:15.391 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # 
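The trace above is single-stepping a small parser: nvme_get runs nvme-cli, splits each "field : value" line on the colon, and eval's the pair into a global associative array (nvme2[], then ng2n1[], and so on). The following is a sketch reconstructed from the functions.sh@16-23 trace lines, not the verbatim SPDK test/nvme/functions.sh source; the whitespace trimming is an assumption:

    # Sketch, reconstructed from the @16-23 xtrace (simplified, not verbatim).
    # nvme_get <array> <id-ctrl|id-ns> <device> fills a global associative
    # array with the "field : value" pairs that nvme-cli prints.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # xtrace shows: local -gA 'ng2n1=()'
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # keep only "reg : val" lines
            reg=${reg//[[:space:]]/}         # strip padding before the colon
            val=${val# }                     # drop the space after it (assumed)
            eval "${ref}[$reg]=\"$val\""     # e.g. ng2n1[nsze]="0x100000"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Each [[ -n ... ]] / eval pair in the raw trace is one iteration of this loop; with xtrace enabled every assignment is echoed three times (test, eval, assignment), which is why the unabridged log is so verbose.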
00:09:15.391 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[] id-ns fields:
00:09:15.391 04:58:44 nvme_fdp -- #   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:09:15.391 04:58:44 nvme_fdp -- #   rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:15.391 04:58:44 nvme_fdp -- #   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:15.392 04:58:44 nvme_fdp -- #   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:15.392 04:58:44 nvme_fdp -- #   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:15.392 04:58:44 nvme_fdp -- #   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:15.392 04:58:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng2n1
00:09:15.392 04:58:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:09:15.392 04:58:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:09:15.392 04:58:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:09:15.392 04:58:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
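The @53-58 lines that bracket each namespace dump are the discovery loop: it globs the controller's sysfs directory for namespace nodes, calls nvme_get on each, and records the device name keyed by namespace id. A sketch of that function body, with the extglob pattern copied verbatim from the @54 trace line; $ctrl (e.g. /sys/class/nvme/nvme2), the enclosing function, and the shopt settings are assumptions:

    # Function-body excerpt reconstructed from the @53-58 trace lines.
    shopt -s extglob nullglob                       # @(...) needs extglob (assumed)
    local -n _ctrl_ns=${ctrl##*/}_ns                # nvme2 -> nameref to nvme2_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue                    # /sys/class/nvme/nvme2/ng2n1, ...
        ns_dev=${ns##*/}                            # basename: ng2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"     # fills ng2n1[], ng2n2[], ...
        _ctrl_ns[${ns##*n}]=$ns_dev                 # key = namespace id: _ctrl_ns[1]=ng2n1
    done

The @(ng2|nvme2n)* pattern matches both the character-device nodes (ng2n1, ng2n2, ...) seen here and any nvme2n* block nodes, so the same loop would visit those as well if present.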
00:09:15.392 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[] id-ns fields: identical to ng2n1[] above
00:09:15.393 04:58:44 nvme_fdp -- #   (nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f,
00:09:15.393 04:58:44 nvme_fdp -- #   mssrl=128 mcl=128 msrc=127, remaining fields 0, zero nguid/eui64, same lbaf0-lbaf7 table, lbaf4 in use)
00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[2]=ng2n2
00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
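Every namespace on this controller reports flbas=0x4 with lbaf4='ms:0 lbads:12 rp:0 (in use)': the active format is LBA format 4, with no metadata and lbads:12, i.e. 2^12 = 4096-byte logical blocks, so nsze=0x100000 (1,048,576 blocks) works out to 4 GiB per namespace. A hypothetical helper (not part of functions.sh; name and trimming are illustrative only) that derives the in-use block size from the arrays populated above:

    # Hypothetical helper, not from functions.sh: decode the in-use LBA size.
    ns_block_size() {
        local -n _ns=$1                      # e.g. ns_block_size ng2n1
        local fmt=$(( _ns[flbas] & 0xf ))    # low nibble selects the format: 4
        local lbads=${_ns[lbaf$fmt]#*lbads:} # -> "12 rp:0 (in use)"
        lbads=${lbads%% *}                   # -> "12"
        echo $(( 1 << lbads ))               # 2^12 = 4096
    }

Called as ns_block_size ng2n1 this prints 4096, consistent with the qemu-backed 4 GiB namespaces the FDP test expects.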
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:15.394 
04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
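The trace records around this point show the core of the nvme_get helper in nvme/functions.sh: it runs /usr/local/src/nvme-cli/nvme id-ns against a namespace node, then walks the "field : value" output with IFS=: and read -r reg val, skipping lines with an empty value ([[ -n ... ]]) and eval-ing every surviving pair into a global associative array (ng2n3 here). A minimal sketch of that pattern, reconstructed from this trace rather than taken from the actual nvme/functions.sh source, looks like:

    nvme_get() {
        # usage: nvme_get <array-name> <command...>, e.g.
        #   nvme_get ng2n3 /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
        # (the helper seen in this trace prepends the nvme-cli binary itself)
        local ref=$1 reg val
        shift
        local -gA "$ref=()"            # global associative array, named by the caller
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}   # "lbaf  4 " -> "lbaf4", "nsze " -> "nsze"
            val=${val# }               # drop the space after the separating colon
            [[ -n $val ]] || continue  # banner/blank lines carry no value part
            eval "${ref}[$reg]=\"$val\""
        done < <("$@")
    }

Each eval in the trace ('ng2n3[nsze]="0x100000"' and so on) corresponds to one iteration of such a loop. Later in the trace, the enumeration loop records each finished array under its namespace index via _ctrl_ns[${ns##*n}]: ${ns##*n} strips everything through the last "n" of the sysfs path, so .../nvme2/ng2n3 yields index 3.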
00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.394 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:15.395 04:58:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:15.395 04:58:44 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:15.395 04:58:44 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.395 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:15.396 
04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:15.396 04:58:44 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:15.396 
04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:15.396 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
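For reading these id-ns dumps: each lbafN value is "ms:<metadata bytes per block> lbads:<log2 of the LBA data size> rp:<relative performance>", and the "(in use)" marker lands on lbaf4 because flbas=0x4 selects LBA format 4 (lbads:12, i.e. 2^12 = 4096-byte blocks with no metadata; dps=0 additionally means end-to-end protection is off). With nsze = ncap = nuse = 0x100000 blocks at that format, each of these test namespaces works out to 4 GiB; a quick bash sanity check:

    echo $(( 0x100000 * 4096 / 1024 / 1024 / 1024 ))   # 0x100000 blocks * 4 KiB = 4 (GiB)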
00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:15.397 04:58:44 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.397 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:15.398 04:58:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.398 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:15.399 04:58:44 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:15.399 04:58:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.399 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.400 04:58:44 nvme_fdp -- 
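The repetitive trace above is nvme_get from nvme/functions.sh at work: every "reg : val" line emitted by nvme id-ns (and, below, id-ctrl) is split on ':' and eval'd into a global associative array named after the device, which is why each [[ -n ... ]] test is followed by an eval and then the resulting assignment. A minimal standalone sketch of that pattern (the device path is the one this run uses; the loop body is a condensed illustration, not the script's exact code):

  # Split "reg : val" identify output on ':' and store it in a dynamically
  # named global associative array, mirroring nvme_get in nvme/functions.sh.
  ref=nvme2n3
  declare -gA "$ref=()"
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}            # keys arrive space-padded, e.g. "nlbaf  "
      [[ -n $reg && -n $val ]] || continue
      eval "${ref}[${reg}]=\"${val# }\""  # same eval pattern seen in the trace
  done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3)
  echo "nlbaf=${nvme2n3[nlbaf]} flbas=${nvme2n3[flbas]}"

Because read assigns everything after the first ':' to the last variable, multi-colon values such as 'ms:0 lbads:9 rp:0 ' survive intact, exactly as they appear in the lbafN entries below.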
nvme/functions.sh@21 -- # read -r reg val 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:15.400 04:58:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:15.400 04:58:44 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:15.400 04:58:44 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:15.400 04:58:44 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:15.400 04:58:44 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:15.400 04:58:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:15.659 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.659 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.659 04:58:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:15.659 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:15.659 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.659 04:58:44 nvme_fdp -- 
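With nvme2n3 fully parsed, its in-use block format can be read straight out of the array: FLBAS bits 3:0 select the LBA format descriptor, here 0x4 pointing at lbaf4, whose lbads:12 means 2^12 = 4096-byte data blocks with no interleaved metadata (ms:0). A small sketch of that decode, using values copied from the trace above (the decode itself is illustrative, not part of functions.sh):

  # FLBAS bits 3:0 index the lbafN descriptors; lbads is log2(block size).
  declare -A nvme2n3=( [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )
  fmt=$(( ${nvme2n3[flbas]} & 0xf ))                              # -> 4
  lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${nvme2n3[lbaf$fmt]}")
  echo "in-use block size: $(( 1 << lbads )) bytes"               # -> 4096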
nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.660 04:58:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.660 
04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.660 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:15.661 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:15.662 04:58:44 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:15.662 04:58:44 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@207 -- 
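All four controllers' stored CTRATT words were just tested the same way: get_nvme_ctrl_feature dereferences the per-controller array through a bash nameref, and ctrl_has_fdp checks bit 19, the Flexible Data Placement attribute, so 0x8000 fails while nvme3's 0x88010 (which has 0x80000 set) passes. A condensed sketch of that check, with array contents copied from the trace (function and variable names here are illustrative):

  # Test CTRATT bit 19 (Flexible Data Placement) through a nameref,
  # mirroring ctrl_has_fdp in nvme/functions.sh.
  declare -A nvme2=( [ctratt]=0x8000 ) nvme3=( [ctratt]=0x88010 )
  has_fdp() {
      local -n _ctrl=$1                        # nameref into the nvmeN array
      (( ${_ctrl[ctratt]:-0} & 1 << 19 ))
  }
  for c in nvme2 nvme3; do
      has_fdp "$c" && echo "$c supports FDP"   # -> only nvme3
  done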
# (( 1 > 0 ))
00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3
00:09:15.663 04:58:44 nvme_fdp -- nvme/functions.sh@209 -- # return 0
00:09:15.663 04:58:44 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:09:15.663 04:58:44 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
00:09:15.663 04:58:44 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:15.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:16.487 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:16.487 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:16.487 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:16.487 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:16.487 04:58:45 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:09:16.487 04:58:45 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:16.487 04:58:45 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:16.487 04:58:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:16.487 ************************************
00:09:16.487 START TEST nvme_flexible_data_placement
00:09:16.487 ************************************
00:09:16.487 04:58:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:09:16.746 Initializing NVMe Controllers
00:09:16.746 Attaching to 0000:00:13.0
00:09:16.746 Controller supports FDP Attached to 0000:00:13.0
00:09:16.746 Namespace ID: 1 Endurance Group ID: 1
00:09:16.746 Initialization complete.
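Before the fdp binary could attach, setup.sh detached the four QEMU NVMe controllers from the kernel nvme driver and bound them to uio_pci_generic so SPDK's PCIe transport can claim 0000:00:13.0; the virtio disk at 0000:00:03.0 is skipped because its partitions are mounted. The resulting bindings can be confirmed through standard Linux sysfs, as in this hedged sketch (the BDF list is taken from the output above):

  # Each PCI device's 'driver' symlink names its currently bound driver.
  for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
      drv=/sys/bus/pci/devices/$bdf/driver
      if [[ -e $drv ]]; then
          echo "$bdf -> $(basename "$(readlink -f "$drv")")"   # uio_pci_generic here
      else
          echo "$bdf -> unbound"
      fi
  done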
00:09:16.746
00:09:16.746 ==================================
00:09:16.746 == FDP tests for Namespace: #01 ==
00:09:16.746 ==================================
00:09:16.746
00:09:16.746 Get Feature: FDP:
00:09:16.746 =================
00:09:16.746 Enabled: Yes
00:09:16.746 FDP configuration Index: 0
00:09:16.746
00:09:16.746 FDP configurations log page
00:09:16.746 ===========================
00:09:16.746 Number of FDP configurations: 1
00:09:16.746 Version: 0
00:09:16.746 Size: 112
00:09:16.746 FDP Configuration Descriptor: 0
00:09:16.746 Descriptor Size: 96
00:09:16.746 Reclaim Group Identifier format: 2
00:09:16.746 FDP Volatile Write Cache: Not Present
00:09:16.746 FDP Configuration: Valid
00:09:16.746 Vendor Specific Size: 0
00:09:16.746 Number of Reclaim Groups: 2
00:09:16.746 Number of Reclaim Unit Handles: 8
00:09:16.746 Max Placement Identifiers: 128
00:09:16.746 Number of Namespaces Supported: 256
00:09:16.746 Reclaim unit Nominal Size: 6000000 bytes
00:09:16.746 Estimated Reclaim Unit Time Limit: Not Reported
00:09:16.746 RUH Desc #000: RUH Type: Initially Isolated
00:09:16.746 RUH Desc #001: RUH Type: Initially Isolated
00:09:16.746 RUH Desc #002: RUH Type: Initially Isolated
00:09:16.746 RUH Desc #003: RUH Type: Initially Isolated
00:09:16.746 RUH Desc #004: RUH Type: Initially Isolated
00:09:16.746 RUH Desc #005: RUH Type: Initially Isolated
00:09:16.746 RUH Desc #006: RUH Type: Initially Isolated
00:09:16.746 RUH Desc #007: RUH Type: Initially Isolated
00:09:16.746
00:09:16.746 FDP reclaim unit handle usage log page
00:09:16.746 ======================================
00:09:16.746 Number of Reclaim Unit Handles: 8
00:09:16.746 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:09:16.746 RUH Usage Desc #001: RUH Attributes: Unused
00:09:16.746 RUH Usage Desc #002: RUH Attributes: Unused
00:09:16.747 RUH Usage Desc #003: RUH Attributes: Unused
00:09:16.747 RUH Usage Desc #004: RUH Attributes: Unused
00:09:16.747 RUH Usage Desc #005: RUH Attributes: Unused
00:09:16.747 RUH Usage Desc #006: RUH Attributes: Unused
00:09:16.747 RUH Usage Desc #007: RUH Attributes: Unused
00:09:16.747
00:09:16.747 FDP statistics log page
00:09:16.747 =======================
00:09:16.747 Host bytes with metadata written: 2225016832
00:09:16.747 Media bytes with metadata written: 2226208768
00:09:16.747 Media bytes erased: 0
00:09:16.747
00:09:16.747 FDP Reclaim unit handle status
00:09:16.747 ==============================
00:09:16.747 Number of RUHS descriptors: 2
00:09:16.747 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000560f
00:09:16.747 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:09:16.747
00:09:16.747 FDP write on placement id: 0 success
00:09:16.747
00:09:16.747 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:09:16.747
00:09:16.747 IO mgmt send: RUH update for Placement ID: #0 Success
00:09:16.747
00:09:16.747 Get Feature: FDP Events for Placement handle: #0
00:09:16.747 ========================
00:09:16.747 Number of FDP Events: 6
00:09:16.747 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:09:16.747 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:09:16.747 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes
00:09:16.747 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:09:16.747 FDP Event: #4 Type: Media Reallocated Enabled: No
00:09:16.747 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:09:16.747
00:09:16.747 FDP events log page
00:09:16.747 ===================
00:09:16.747 Number of FDP events: 1
00:09:16.747 FDP Event #0:
00:09:16.747 Event Type: RU Not Written to Capacity
00:09:16.747 Placement Identifier: Valid
00:09:16.747 NSID: Valid
00:09:16.747 Location: Valid
00:09:16.747 Placement Identifier: 0
00:09:16.747 Event Timestamp: 2
00:09:16.747 Namespace Identifier: 1
00:09:16.747 Reclaim Group Identifier: 0
00:09:16.747 Reclaim Unit Handle Identifier: 0
00:09:16.747
00:09:16.747 FDP test passed
00:09:16.747
00:09:16.747 real 0m0.207s
00:09:16.747 user 0m0.061s
00:09:16.747 sys 0m0.045s
00:09:16.747 04:58:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:16.747 04:58:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:09:16.747 ************************************
00:09:16.747 END TEST nvme_flexible_data_placement
00:09:16.747 ************************************
00:09:16.747
00:09:16.747 real 0m7.423s
00:09:16.747 user 0m1.068s
00:09:16.747 sys 0m1.307s
00:09:16.747 04:58:45 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:16.747 04:58:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:16.747 ************************************
00:09:16.747 END TEST nvme_fdp
00:09:16.747 ************************************
00:09:16.747 04:58:45 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:09:16.747 04:58:45 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:16.747 04:58:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:16.747 04:58:45 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:16.747 04:58:45 -- common/autotest_common.sh@10 -- # set +x
00:09:16.747 ************************************
00:09:16.747 START TEST nvme_rpc
00:09:16.747 ************************************
00:09:16.747 04:58:45 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:16.747 * Looking for test storage...
00:09:16.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:16.747 04:58:46 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.747 04:58:46 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.747 04:58:46 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.060 04:58:46 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:17.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.060 --rc genhtml_branch_coverage=1 00:09:17.060 --rc genhtml_function_coverage=1 00:09:17.060 --rc genhtml_legend=1 00:09:17.060 --rc geninfo_all_blocks=1 00:09:17.060 --rc geninfo_unexecuted_blocks=1 00:09:17.060 00:09:17.060 ' 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:17.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.060 --rc genhtml_branch_coverage=1 00:09:17.060 --rc genhtml_function_coverage=1 00:09:17.060 --rc genhtml_legend=1 00:09:17.060 --rc geninfo_all_blocks=1 00:09:17.060 --rc geninfo_unexecuted_blocks=1 00:09:17.060 00:09:17.060 ' 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:17.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.060 --rc genhtml_branch_coverage=1 00:09:17.060 --rc genhtml_function_coverage=1 00:09:17.060 --rc genhtml_legend=1 00:09:17.060 --rc geninfo_all_blocks=1 00:09:17.060 --rc geninfo_unexecuted_blocks=1 00:09:17.060 00:09:17.060 ' 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:17.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.060 --rc genhtml_branch_coverage=1 00:09:17.060 --rc genhtml_function_coverage=1 00:09:17.060 --rc genhtml_legend=1 00:09:17.060 --rc geninfo_all_blocks=1 00:09:17.060 --rc geninfo_unexecuted_blocks=1 00:09:17.060 00:09:17.060 ' 00:09:17.060 04:58:46 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:17.060 04:58:46 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:17.060 04:58:46 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:17.060 04:58:46 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77202 00:09:17.060 04:58:46 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:17.060 04:58:46 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77202 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77202 ']' 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.060 04:58:46 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.060 04:58:46 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:17.060 [2024-11-28 04:58:46.228788] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:09:17.060 [2024-11-28 04:58:46.228902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77202 ] 00:09:17.346 [2024-11-28 04:58:46.373073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:17.346 [2024-11-28 04:58:46.393250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.346 [2024-11-28 04:58:46.393295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.990 04:58:47 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.990 04:58:47 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:17.990 04:58:47 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:18.247 Nvme0n1 00:09:18.247 04:58:47 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:18.247 04:58:47 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:18.247 request: 00:09:18.247 { 00:09:18.247 "bdev_name": "Nvme0n1", 00:09:18.247 "filename": "non_existing_file", 00:09:18.247 "method": "bdev_nvme_apply_firmware", 00:09:18.247 "req_id": 1 00:09:18.247 } 00:09:18.247 Got JSON-RPC error response 00:09:18.247 response: 00:09:18.247 { 00:09:18.247 "code": -32603, 00:09:18.247 "message": "open file failed." 00:09:18.248 } 00:09:18.248 04:58:47 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:18.248 04:58:47 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:18.248 04:58:47 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:18.508 04:58:47 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:18.508 04:58:47 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77202 00:09:18.508 04:58:47 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77202 ']' 00:09:18.508 04:58:47 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77202 00:09:18.508 04:58:47 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:18.508 04:58:47 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.508 04:58:47 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77202 00:09:18.508 04:58:47 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.508 killing process with pid 77202 00:09:18.508 04:58:47 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.508 04:58:47 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77202' 00:09:18.508 04:58:47 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77202 00:09:18.508 04:58:47 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77202 00:09:18.769 00:09:18.769 real 0m2.052s 00:09:18.769 user 0m4.077s 00:09:18.769 sys 0m0.441s 00:09:18.769 04:58:48 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.769 ************************************ 00:09:18.769 END TEST nvme_rpc 00:09:18.769 ************************************ 00:09:18.769 04:58:48 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.030 04:58:48 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:19.030 04:58:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:19.030 04:58:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.030 04:58:48 -- common/autotest_common.sh@10 -- # set +x 00:09:19.030 ************************************ 00:09:19.030 START TEST nvme_rpc_timeouts 00:09:19.030 ************************************ 00:09:19.030 04:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:19.030 * Looking for test storage... 00:09:19.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:19.030 04:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:19.030 04:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:19.030 04:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:19.030 04:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.030 04:58:48 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:19.030 04:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.030 04:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:19.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.030 --rc genhtml_branch_coverage=1 00:09:19.030 --rc genhtml_function_coverage=1 00:09:19.030 --rc genhtml_legend=1 00:09:19.030 --rc geninfo_all_blocks=1 00:09:19.030 --rc geninfo_unexecuted_blocks=1 00:09:19.030 00:09:19.030 ' 00:09:19.030 04:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:19.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.030 --rc genhtml_branch_coverage=1 00:09:19.030 --rc genhtml_function_coverage=1 00:09:19.030 --rc genhtml_legend=1 00:09:19.030 --rc geninfo_all_blocks=1 00:09:19.030 --rc geninfo_unexecuted_blocks=1 00:09:19.030 00:09:19.030 ' 00:09:19.030 04:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:19.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.030 --rc genhtml_branch_coverage=1 00:09:19.030 --rc genhtml_function_coverage=1 00:09:19.030 --rc genhtml_legend=1 00:09:19.030 --rc geninfo_all_blocks=1 00:09:19.030 --rc geninfo_unexecuted_blocks=1 00:09:19.030 00:09:19.030 ' 00:09:19.030 04:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:19.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.030 --rc genhtml_branch_coverage=1 00:09:19.030 --rc genhtml_function_coverage=1 00:09:19.030 --rc genhtml_legend=1 00:09:19.030 --rc geninfo_all_blocks=1 00:09:19.030 --rc geninfo_unexecuted_blocks=1 00:09:19.030 00:09:19.030 ' 00:09:19.030 04:58:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:19.030 04:58:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77257 00:09:19.030 04:58:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77257 00:09:19.030 04:58:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77289 00:09:19.030 04:58:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
00:09:19.030 04:58:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:19.030 04:58:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77289 00:09:19.030 04:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77289 ']' 00:09:19.030 04:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.030 04:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.030 04:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.030 04:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.030 04:58:48 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:19.292 [2024-11-28 04:58:48.317352] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:09:19.292 [2024-11-28 04:58:48.317480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77289 ] 00:09:19.292 [2024-11-28 04:58:48.461590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:19.292 [2024-11-28 04:58:48.484161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.292 [2024-11-28 04:58:48.484280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.866 04:58:49 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.866 04:58:49 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:19.866 Checking default timeout settings: 00:09:19.866 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:19.866 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:20.439 Making settings changes with rpc: 00:09:20.439 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:20.439 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:20.439 Check default vs. modified settings: 00:09:20.439 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:20.439 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:20.700 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:20.700 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:20.700 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77257 00:09:20.700 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.700 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:20.700 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:20.700 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77257 00:09:20.700 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.700 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:20.700 Setting action_on_timeout is changed as expected. 00:09:20.700 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:20.700 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:20.700 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:20.700 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:20.700 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77257 00:09:20.960 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:20.961 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.961 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:20.961 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.961 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:20.961 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77257 00:09:20.961 Setting timeout_us is changed as expected. 00:09:20.961 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:20.961 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:20.961 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:20.961 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:20.961 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77257 00:09:20.961 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:20.961 04:58:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.961 04:58:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:20.961 04:58:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77257 00:09:20.961 04:58:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:20.961 04:58:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.961 Setting timeout_admin_us is changed as expected. 00:09:20.961 04:58:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:20.961 04:58:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:20.961 04:58:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:20.961 04:58:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:20.961 04:58:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77257 /tmp/settings_modified_77257 00:09:20.961 04:58:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77289 00:09:20.961 04:58:50 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77289 ']' 00:09:20.961 04:58:50 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77289 00:09:20.961 04:58:50 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:20.961 04:58:50 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.961 04:58:50 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77289 00:09:20.961 killing process with pid 77289 00:09:20.961 04:58:50 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.961 04:58:50 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.961 04:58:50 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77289' 00:09:20.961 04:58:50 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77289 00:09:20.961 04:58:50 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77289 00:09:21.222 RPC TIMEOUT SETTING TEST PASSED. 00:09:21.222 04:58:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
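All three checks above follow one pattern: snapshot the bdev_nvme options with save_config before and after bdev_nvme_set_options, then compare the scrubbed values. A minimal sketch of the whole nvme_rpc_timeouts round trip, assembled only from the RPCs and text tools traced above (generic /tmp paths in place of the pid-suffixed ones):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py save_config > /tmp/settings_default      # defaults: action none, both timeouts 0
  $rpc_py bdev_nvme_set_options --timeout-us=12000000 \
      --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc_py save_config > /tmp/settings_modified
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      if [ "$before" != "$after" ]; then
          echo "Setting $setting is changed as expected."
      fi
  done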
00:09:21.222 00:09:21.222 real 0m2.215s 00:09:21.222 user 0m4.401s 00:09:21.222 sys 0m0.482s 00:09:21.222 ************************************ 00:09:21.222 END TEST nvme_rpc_timeouts 00:09:21.222 ************************************ 00:09:21.222 04:58:50 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.222 04:58:50 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:21.222 04:58:50 -- spdk/autotest.sh@239 -- # uname -s 00:09:21.222 04:58:50 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:21.222 04:58:50 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:21.222 04:58:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.222 04:58:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.222 04:58:50 -- common/autotest_common.sh@10 -- # set +x 00:09:21.222 ************************************ 00:09:21.222 START TEST sw_hotplug 00:09:21.222 ************************************ 00:09:21.222 04:58:50 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:21.222 * Looking for test storage... 00:09:21.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:21.222 04:58:50 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:21.222 04:58:50 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:21.223 04:58:50 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:21.223 04:58:50 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.223 04:58:50 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:21.484 04:58:50 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.484 04:58:50 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.484 04:58:50 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.484 04:58:50 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:21.484 04:58:50 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.484 04:58:50 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:21.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.484 --rc genhtml_branch_coverage=1 00:09:21.484 --rc genhtml_function_coverage=1 00:09:21.484 --rc genhtml_legend=1 00:09:21.484 --rc geninfo_all_blocks=1 00:09:21.484 --rc geninfo_unexecuted_blocks=1 00:09:21.484 00:09:21.484 ' 00:09:21.484 04:58:50 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:21.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.484 --rc genhtml_branch_coverage=1 00:09:21.484 --rc genhtml_function_coverage=1 00:09:21.484 --rc genhtml_legend=1 00:09:21.484 --rc geninfo_all_blocks=1 00:09:21.484 --rc geninfo_unexecuted_blocks=1 00:09:21.484 00:09:21.484 ' 00:09:21.484 04:58:50 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:21.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.484 --rc genhtml_branch_coverage=1 00:09:21.484 --rc genhtml_function_coverage=1 00:09:21.484 --rc genhtml_legend=1 00:09:21.484 --rc geninfo_all_blocks=1 00:09:21.484 --rc geninfo_unexecuted_blocks=1 00:09:21.484 00:09:21.484 ' 00:09:21.484 04:58:50 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:21.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.484 --rc genhtml_branch_coverage=1 00:09:21.484 --rc genhtml_function_coverage=1 00:09:21.484 --rc genhtml_legend=1 00:09:21.484 --rc geninfo_all_blocks=1 00:09:21.484 --rc geninfo_unexecuted_blocks=1 00:09:21.484 00:09:21.484 ' 00:09:21.484 04:58:50 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:21.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:21.746 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:21.746 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:21.746 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:21.746 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:21.746 04:58:50 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:21.746 04:58:50 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:21.746 04:58:50 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
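sw_hotplug first asks nvme_in_userspace, traced next, for every NVMe controller, i.e. every PCI function with class 01 (mass storage), subclass 08 (non-volatile memory) and prog-if 02 (NVMe). The heart of that helper is a single lspci pipeline, condensed here from the trace that follows:

  # Print the bus/device/function of every NVMe-class PCI function,
  # exactly as scripts/common.sh's iter_pci_class_code 01 08 02 does below
  lspci -mm -n -D | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

Each address is then kept only if it passes the PCI_ALLOWED/PCI_BLOCKED filter in pci_can_use and, on Linux, still has an nvme driver entry, which is the [[ -e /sys/bus/pci/drivers/nvme/... ]] check in the trace.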
00:09:21.746 04:58:50 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:21.746 04:58:50 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:21.746 04:58:50 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:21.746 04:58:51 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:21.746 04:58:51 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:21.746 04:58:51 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:22.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.318 Waiting for block devices as requested 00:09:22.318 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.318 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.578 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.578 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.862 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:27.862 04:58:56 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:27.862 04:58:56 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:28.123 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:28.123 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:28.123 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:28.384 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:28.645 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.645 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.645 04:58:57 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:28.645 04:58:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:28.904 04:58:57 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:28.904 04:58:57 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:28.904 04:58:57 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78133 00:09:28.904 04:58:57 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:28.904 04:58:57 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:28.904 04:58:57 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:28.904 04:58:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:28.904 04:58:57 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:28.904 04:58:57 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:28.904 04:58:57 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:28.904 04:58:57 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:28.904 04:58:57 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:28.904 04:58:57 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:28.904 04:58:57 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:28.904 04:58:57 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:28.904 04:58:57 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:28.904 04:58:57 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:28.904 Initializing NVMe Controllers 00:09:28.904 Attaching to 0000:00:10.0 00:09:28.904 Attaching to 0000:00:11.0 00:09:28.904 Attached to 0000:00:11.0 00:09:28.904 Attached to 0000:00:10.0 00:09:28.904 Initialization complete. Starting I/O... 
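Both controllers are attached and the hotplug example app, launched above with -n 6 -r 6 (hotplug_events=3 cycles times nvme_count=2 drives, so six expected insertions and six expected removals), keeps submitting I/O while the shell side yanks the devices out from under it. Removal and re-insertion go through the kernel's standard PCI sysfs hooks; a minimal sketch of one cycle for one device, assuming the usual sysfs paths (the xtrace below shows only the echo commands, not their redirect targets):

  bdf=0000:00:10.0                              # one of the two drives under test
  echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # surprise hot-removal
  sleep 6                                       # hotplug_wait from the test
  echo 1 > /sys/bus/pci/rescan                  # rediscover the device
  # the echo uio_pci_generic / echo $bdf lines in the trace then re-bind
  # the controller to its userspace driver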
00:09:28.904 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:28.904 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:28.904 00:09:30.281 QEMU NVMe Ctrl (12341 ): 2550 I/Os completed (+2550) 00:09:30.281 QEMU NVMe Ctrl (12340 ): 2707 I/Os completed (+2707) 00:09:30.281 00:09:31.211 QEMU NVMe Ctrl (12341 ): 7106 I/Os completed (+4556) 00:09:31.211 QEMU NVMe Ctrl (12340 ): 7768 I/Os completed (+5061) 00:09:31.211 00:09:32.142 QEMU NVMe Ctrl (12341 ): 11278 I/Os completed (+4172) 00:09:32.142 QEMU NVMe Ctrl (12340 ): 11842 I/Os completed (+4074) 00:09:32.142 00:09:33.072 QEMU NVMe Ctrl (12341 ): 15506 I/Os completed (+4228) 00:09:33.072 QEMU NVMe Ctrl (12340 ): 15988 I/Os completed (+4146) 00:09:33.072 00:09:34.004 QEMU NVMe Ctrl (12341 ): 19695 I/Os completed (+4189) 00:09:34.004 QEMU NVMe Ctrl (12340 ): 20141 I/Os completed (+4153) 00:09:34.004 00:09:34.946 04:59:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:34.946 04:59:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:34.946 04:59:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:34.946 [2024-11-28 04:59:03.976917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:34.946 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:34.946 [2024-11-28 04:59:03.978280] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.946 [2024-11-28 04:59:03.978347] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.946 [2024-11-28 04:59:03.978364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.946 [2024-11-28 04:59:03.978379] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.946 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:34.946 [2024-11-28 04:59:03.979526] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.946 [2024-11-28 04:59:03.979563] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.946 [2024-11-28 04:59:03.979575] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.946 [2024-11-28 04:59:03.979588] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.946 EAL: Cannot open sysfs resource 00:09:34.946 EAL: pci_scan_one(): cannot parse resource 00:09:34.946 EAL: Scan for (pci) bus failed. 00:09:34.946 04:59:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:34.946 04:59:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:34.946 [2024-11-28 04:59:03.999394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:34.946 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:34.946 [2024-11-28 04:59:04.000470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.946 [2024-11-28 04:59:04.000536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.946 [2024-11-28 04:59:04.000619] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.946 [2024-11-28 04:59:04.000650] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.946 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:34.946 [2024-11-28 04:59:04.001774] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.946 [2024-11-28 04:59:04.001807] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.946 [2024-11-28 04:59:04.001824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.946 [2024-11-28 04:59:04.001835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.946 04:59:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:34.946 04:59:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:34.946 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:34.946 EAL: Scan for (pci) bus failed. 00:09:34.946 04:59:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:34.946 04:59:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:34.946 04:59:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:34.946 00:09:34.946 04:59:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:34.946 04:59:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:34.946 04:59:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:34.946 04:59:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:34.946 04:59:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:34.946 Attaching to 0000:00:10.0 00:09:34.946 Attached to 0000:00:10.0 00:09:35.207 04:59:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:35.207 04:59:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:35.207 04:59:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:35.207 Attaching to 0000:00:11.0 00:09:35.207 Attached to 0000:00:11.0 00:09:36.150 QEMU NVMe Ctrl (12340 ): 3563 I/Os completed (+3563) 00:09:36.150 QEMU NVMe Ctrl (12341 ): 3243 I/Os completed (+3243) 00:09:36.150 00:09:37.087 QEMU NVMe Ctrl (12340 ): 7463 I/Os completed (+3900) 00:09:37.087 QEMU NVMe Ctrl (12341 ): 7131 I/Os completed (+3888) 00:09:37.087 00:09:38.023 QEMU NVMe Ctrl (12340 ): 11739 I/Os completed (+4276) 00:09:38.023 QEMU NVMe Ctrl (12341 ): 11451 I/Os completed (+4320) 00:09:38.023 00:09:38.965 QEMU NVMe Ctrl (12340 ): 15543 I/Os completed (+3804) 00:09:38.965 QEMU NVMe Ctrl (12341 ): 15279 I/Os completed (+3828) 00:09:38.965 00:09:39.907 QEMU NVMe Ctrl (12340 ): 19171 I/Os completed (+3628) 00:09:39.907 QEMU NVMe Ctrl (12341 ): 18952 I/Os completed (+3673) 00:09:39.907 00:09:41.281 QEMU NVMe Ctrl (12340 ): 23191 I/Os completed (+4020) 00:09:41.281 QEMU NVMe Ctrl (12341 ): 23000 I/Os completed (+4048) 00:09:41.281 00:09:42.215 QEMU NVMe Ctrl (12340 ): 27428 I/Os completed (+4237) 00:09:42.215 QEMU NVMe Ctrl (12341 ): 27270 I/Os completed (+4270) 
00:09:42.215 00:09:43.157 QEMU NVMe Ctrl (12340 ): 31430 I/Os completed (+4002) 00:09:43.157 QEMU NVMe Ctrl (12341 ): 31337 I/Os completed (+4067) 00:09:43.157 00:09:44.101 QEMU NVMe Ctrl (12340 ): 35134 I/Os completed (+3704) 00:09:44.102 QEMU NVMe Ctrl (12341 ): 35059 I/Os completed (+3722) 00:09:44.102 00:09:45.046 QEMU NVMe Ctrl (12340 ): 38854 I/Os completed (+3720) 00:09:45.046 QEMU NVMe Ctrl (12341 ): 38781 I/Os completed (+3722) 00:09:45.046 00:09:46.118 QEMU NVMe Ctrl (12340 ): 42647 I/Os completed (+3793) 00:09:46.118 QEMU NVMe Ctrl (12341 ): 42389 I/Os completed (+3608) 00:09:46.118 00:09:47.060 QEMU NVMe Ctrl (12340 ): 46351 I/Os completed (+3704) 00:09:47.060 QEMU NVMe Ctrl (12341 ): 46171 I/Os completed (+3782) 00:09:47.060 00:09:47.060 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:47.060 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:47.060 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:47.060 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:47.060 [2024-11-28 04:59:16.294281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:47.060 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:47.060 [2024-11-28 04:59:16.295334] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.060 [2024-11-28 04:59:16.295449] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.060 [2024-11-28 04:59:16.295485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.060 [2024-11-28 04:59:16.295520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.060 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:47.060 [2024-11-28 04:59:16.296738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.060 [2024-11-28 04:59:16.296882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.060 [2024-11-28 04:59:16.296919] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.060 [2024-11-28 04:59:16.296977] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.060 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:47.060 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:47.060 [2024-11-28 04:59:16.312865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:47.060 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:47.060 [2024-11-28 04:59:16.313876] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.060 [2024-11-28 04:59:16.313968] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.060 [2024-11-28 04:59:16.314001] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.060 [2024-11-28 04:59:16.314026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.060 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:47.060 [2024-11-28 04:59:16.315131] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.060 [2024-11-28 04:59:16.315205] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.060 [2024-11-28 04:59:16.315240] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.060 [2024-11-28 04:59:16.315272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.060 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:47.060 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:47.060 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:47.060 EAL: Scan for (pci) bus failed. 00:09:47.321 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:47.321 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:47.321 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:47.321 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:47.321 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:47.321 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:47.321 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:47.321 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:47.321 Attaching to 0000:00:10.0 00:09:47.321 Attached to 0000:00:10.0 00:09:47.321 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:47.582 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:47.582 04:59:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:47.582 Attaching to 0000:00:11.0 00:09:47.582 Attached to 0000:00:11.0 00:09:48.154 QEMU NVMe Ctrl (12340 ): 2359 I/Os completed (+2359) 00:09:48.154 QEMU NVMe Ctrl (12341 ): 2063 I/Os completed (+2063) 00:09:48.154 00:09:49.099 QEMU NVMe Ctrl (12340 ): 6063 I/Os completed (+3704) 00:09:49.099 QEMU NVMe Ctrl (12341 ): 5788 I/Os completed (+3725) 00:09:49.099 00:09:50.045 QEMU NVMe Ctrl (12340 ): 9279 I/Os completed (+3216) 00:09:50.045 QEMU NVMe Ctrl (12341 ): 9013 I/Os completed (+3225) 00:09:50.045 00:09:50.990 QEMU NVMe Ctrl (12340 ): 12327 I/Os completed (+3048) 00:09:50.990 QEMU NVMe Ctrl (12341 ): 12056 I/Os completed (+3043) 00:09:50.990 00:09:51.935 QEMU NVMe Ctrl (12340 ): 15919 I/Os completed (+3592) 00:09:51.936 QEMU NVMe Ctrl (12341 ): 15656 I/Os completed (+3600) 00:09:51.936 00:09:52.880 QEMU NVMe Ctrl (12340 ): 19067 I/Os completed (+3148) 00:09:52.880 QEMU NVMe Ctrl (12341 ): 18805 I/Os completed (+3149) 00:09:52.880 00:09:54.269 QEMU NVMe Ctrl (12340 ): 22247 I/Os completed (+3180) 00:09:54.269 QEMU NVMe Ctrl (12341 ): 21985 I/Os completed (+3180) 00:09:54.269 
00:09:55.213 QEMU NVMe Ctrl (12340 ): 25467 I/Os completed (+3220) 00:09:55.213 QEMU NVMe Ctrl (12341 ): 25217 I/Os completed (+3232) 00:09:55.213 00:09:56.157 QEMU NVMe Ctrl (12340 ): 28587 I/Os completed (+3120) 00:09:56.157 QEMU NVMe Ctrl (12341 ): 28337 I/Os completed (+3120) 00:09:56.157 00:09:57.104 QEMU NVMe Ctrl (12340 ): 31683 I/Os completed (+3096) 00:09:57.104 QEMU NVMe Ctrl (12341 ): 31433 I/Os completed (+3096) 00:09:57.104 00:09:58.052 QEMU NVMe Ctrl (12340 ): 34719 I/Os completed (+3036) 00:09:58.052 QEMU NVMe Ctrl (12341 ): 34469 I/Os completed (+3036) 00:09:58.052 00:09:58.999 QEMU NVMe Ctrl (12340 ): 37783 I/Os completed (+3064) 00:09:58.999 QEMU NVMe Ctrl (12341 ): 37533 I/Os completed (+3064) 00:09:58.999 00:09:59.571 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:59.571 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:59.571 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.571 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.571 [2024-11-28 04:59:28.620263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:59.571 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:59.571 [2024-11-28 04:59:28.621820] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.571 [2024-11-28 04:59:28.621936] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.571 [2024-11-28 04:59:28.621972] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.571 [2024-11-28 04:59:28.622083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.571 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:59.571 [2024-11-28 04:59:28.624240] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.571 [2024-11-28 04:59:28.624302] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.572 [2024-11-28 04:59:28.624316] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.572 [2024-11-28 04:59:28.624332] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.572 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.572 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.572 [2024-11-28 04:59:28.644225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:59.572 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:59.572 [2024-11-28 04:59:28.645627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.572 [2024-11-28 04:59:28.645825] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.572 [2024-11-28 04:59:28.645861] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.572 [2024-11-28 04:59:28.646053] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.572 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:59.572 [2024-11-28 04:59:28.649747] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.572 [2024-11-28 04:59:28.650373] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.572 [2024-11-28 04:59:28.650412] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.572 [2024-11-28 04:59:28.650427] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.572 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:59.572 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:59.572 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.572 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.572 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:59.832 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:59.832 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:59.832 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.832 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.832 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:59.832 Attaching to 0000:00:10.0 00:09:59.832 Attached to 0000:00:10.0 00:09:59.832 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:59.832 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:59.832 04:59:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:59.832 Attaching to 0000:00:11.0 00:09:59.832 Attached to 0000:00:11.0 00:09:59.832 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:59.832 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:59.832 [2024-11-28 04:59:28.999462] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:12.074 04:59:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:12.074 04:59:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:12.074 04:59:41 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.03 00:10:12.074 04:59:41 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.03 00:10:12.074 04:59:41 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:12.074 04:59:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.03 00:10:12.074 04:59:41 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.03 2 00:10:12.074 remove_attach_helper took 43.03s to complete (handling 2 nvme drive(s)) 04:59:41 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78133 00:10:18.748 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78133) - No such process 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78133 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=78687 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 78687 00:10:18.748 04:59:47 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 78687 ']' 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:18.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.748 04:59:47 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.748 04:59:47 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.748 04:59:47 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.748 04:59:47 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.748 04:59:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:18.748 [2024-11-28 04:59:47.100641] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:10:18.748 [2024-11-28 04:59:47.100797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78687 ] 00:10:18.748 [2024-11-28 04:59:47.248864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.748 [2024-11-28 04:59:47.277484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.748 04:59:47 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.748 04:59:47 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:18.748 04:59:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.748 04:59:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:18.748 04:59:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:18.748 04:59:47 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:18.748 04:59:47 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:18.748 04:59:47 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:18.748 04:59:47 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:18.748 04:59:47 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:18.748 04:59:47 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:18.748 04:59:47 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:25.318 04:59:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:25.318 04:59:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:25.318 04:59:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:25.318 04:59:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:25.318 04:59:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:25.318 04:59:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:25.318 04:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:25.318 04:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:25.318 04:59:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:25.318 04:59:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:25.318 04:59:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:25.318 04:59:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.318 04:59:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:25.318 04:59:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.318 04:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:25.318 04:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:25.318 [2024-11-28 04:59:53.997695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:25.318 [2024-11-28 04:59:53.998794] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.318 [2024-11-28 04:59:53.998827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.318 [2024-11-28 04:59:53.998845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.318 [2024-11-28 04:59:53.998858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.318 [2024-11-28 04:59:53.998866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.318 [2024-11-28 04:59:53.998874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.318 [2024-11-28 04:59:53.998883] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.318 [2024-11-28 04:59:53.998890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.318 [2024-11-28 04:59:53.998899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.318 [2024-11-28 04:59:53.998905] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.318 [2024-11-28 04:59:53.998914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.319 [2024-11-28 04:59:53.998920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.319 [2024-11-28 04:59:54.397692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:25.319 [2024-11-28 04:59:54.398843] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.319 [2024-11-28 04:59:54.398875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.319 [2024-11-28 04:59:54.398884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.319 [2024-11-28 04:59:54.398896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.319 [2024-11-28 04:59:54.398903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.319 [2024-11-28 04:59:54.398910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.319 [2024-11-28 04:59:54.398917] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.319 [2024-11-28 04:59:54.398925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.319 [2024-11-28 04:59:54.398931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.319 [2024-11-28 04:59:54.398941] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.319 [2024-11-28 04:59:54.398947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.319 [2024-11-28 04:59:54.398955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.319 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:25.319 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:25.319 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:25.319 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:25.319 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:25.319 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:25.319 04:59:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.319 04:59:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:25.319 04:59:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.319 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:25.319 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:25.319 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.319 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.319 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:25.575 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:25.575 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 
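The bdev_bdfs calls traced throughout (sw_hotplug.sh lines 12-13) are how the test decides the controllers are really gone: it asks the running target for its bdevs and reduces them to unique PCI addresses. A sketch of that helper and the half-second polling loop around it; the jq filter is verbatim from the trace, while the loop shape and rpc_cmd being the autotest wrapper around scripts/rpc.py are assumptions:

    # List the PCI addresses still backing NVMe bdevs on the running target.
    bdev_bdfs() {
      rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
    }

    # Poll until nothing is left, printing the same "Still waiting" lines
    # seen above; (( ${#bdfs[@]} > 0 )) is the traced "(( 2 > 0 ))" check.
    while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
      sleep 0.5
    done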
00:10:25.575 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.575 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.575 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:25.575 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:25.575 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:25.575 04:59:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:37.771 05:00:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.771 05:00:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.771 05:00:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:37.771 05:00:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.771 05:00:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.771 05:00:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:37.771 05:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:37.771 [2024-11-28 05:00:06.897866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
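After the rescan the devices come back bound to no driver, so the test pushes each one to uio_pci_generic (lines 58-62 above: one driver-name write, two BDF writes, and a final empty write per device). The exact sysfs targets are again hidden by xtrace; the standard sequence those echoes most plausibly map to:

    # Rebind one returned device to uio_pci_generic. driver_override,
    # drivers_probe and bind are standard sysfs nodes; mapping them onto the
    # traced echoes is an assumption, not shown verbatim in the log.
    bdf=0000:00:10.0
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe        # ask the kernel to match it
    echo "$bdf" > /sys/bus/pci/drivers/uio_pci_generic/bind 2>/dev/null || true
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override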
00:10:37.771 [2024-11-28 05:00:06.899006] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.771 [2024-11-28 05:00:06.899101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.771 [2024-11-28 05:00:06.899160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.771 [2024-11-28 05:00:06.899262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.771 [2024-11-28 05:00:06.899284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.771 [2024-11-28 05:00:06.899340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.771 [2024-11-28 05:00:06.899369] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.771 [2024-11-28 05:00:06.899512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.771 [2024-11-28 05:00:06.899622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.771 [2024-11-28 05:00:06.899647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.771 [2024-11-28 05:00:06.899666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.771 [2024-11-28 05:00:06.899721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.029 [2024-11-28 05:00:07.297873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:38.029 [2024-11-28 05:00:07.298983] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.029 [2024-11-28 05:00:07.299083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.029 [2024-11-28 05:00:07.299144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.029 [2024-11-28 05:00:07.299174] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.029 [2024-11-28 05:00:07.299230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.029 [2024-11-28 05:00:07.299258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.029 [2024-11-28 05:00:07.299320] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.029 [2024-11-28 05:00:07.299341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.029 [2024-11-28 05:00:07.299364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.029 [2024-11-28 05:00:07.299389] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.029 [2024-11-28 05:00:07.299404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.029 [2024-11-28 05:00:07.299430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.287 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:38.287 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:38.287 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:38.287 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:38.287 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.287 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.287 05:00:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.287 05:00:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.287 05:00:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.287 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:38.287 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:38.287 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:38.287 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:38.287 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:38.287 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:38.287 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:38.287 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:38.287 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:38.287 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:38.587 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:38.587 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:38.587 05:00:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.808 05:00:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.808 05:00:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.808 05:00:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:50.808 [2024-11-28 05:00:19.698069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:50.808 [2024-11-28 05:00:19.699363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.808 [2024-11-28 05:00:19.699465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.808 [2024-11-28 05:00:19.699531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.808 [2024-11-28 05:00:19.699591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.808 [2024-11-28 05:00:19.699612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.808 [2024-11-28 05:00:19.699660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.808 [2024-11-28 05:00:19.699713] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.808 [2024-11-28 05:00:19.699731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.808 [2024-11-28 05:00:19.699790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.808 [2024-11-28 05:00:19.699841] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.808 [2024-11-28 05:00:19.699861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.808 [2024-11-28 05:00:19.699909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:50.808 05:00:19 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.808 05:00:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.808 05:00:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.808 05:00:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:50.808 05:00:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:51.066 [2024-11-28 05:00:20.098073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:51.066 [2024-11-28 05:00:20.099068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.066 [2024-11-28 05:00:20.099099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.066 [2024-11-28 05:00:20.099108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.066 [2024-11-28 05:00:20.099121] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.066 [2024-11-28 05:00:20.099127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.066 [2024-11-28 05:00:20.099137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.066 [2024-11-28 05:00:20.099143] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.066 [2024-11-28 05:00:20.099151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.066 [2024-11-28 05:00:20.099158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.066 [2024-11-28 05:00:20.099167] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.066 [2024-11-28 05:00:20.099173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.066 [2024-11-28 05:00:20.099195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.066 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:51.066 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:51.066 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:51.066 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.066 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.066 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:10:51.066 05:00:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.067 05:00:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.067 05:00:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.067 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:51.067 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:51.325 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.325 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.325 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:51.325 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:51.325 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.325 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.325 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.325 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:51.325 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:51.325 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.325 05:00:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.67 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.67 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.67 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.67 2 00:11:03.525 remove_attach_helper took 44.67s to complete (handling 2 nvme drive(s)) 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.525 05:00:32 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:03.525 05:00:32 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:03.525 05:00:32 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:10.082 05:00:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:10.082 05:00:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.082 05:00:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.082 05:00:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.082 05:00:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.082 05:00:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:10.082 05:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.082 05:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.082 05:00:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.083 05:00:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.083 05:00:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.083 05:00:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.083 05:00:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.083 05:00:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.083 05:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:10.083 05:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:10.083 [2024-11-28 05:00:38.697886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
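The tgt_run_hotplug phase running here repeats the same three cycles, but with surprise-removal detection done inside the target: bdev_nvme_set_hotplug toggles SPDK's NVMe hotplug monitor over RPC, and debug_remove_attach_helper 3 6 true re-runs the helper with use_bdev=true, i.e. device presence is now judged through bdev_get_bdevs rather than raw PCI state. All three commands and their flags appear verbatim in the trace; the comments on the positional parameters come from the locals traced at sw_hotplug.sh lines 27-29:

    rpc_cmd bdev_nvme_set_hotplug -d      # monitor off (the test re-arms it next)
    rpc_cmd bdev_nvme_set_hotplug -e      # monitor on: new controllers auto-attach
    debug_remove_attach_helper 3 6 true   # 3 hotplug_events, 6 s hotplug_wait, use_bdev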
00:11:10.083 [2024-11-28 05:00:38.698703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.083 [2024-11-28 05:00:38.698733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.083 [2024-11-28 05:00:38.698745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.083 [2024-11-28 05:00:38.698757] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.083 [2024-11-28 05:00:38.698765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.083 [2024-11-28 05:00:38.698771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.083 [2024-11-28 05:00:38.698779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.083 [2024-11-28 05:00:38.698786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.083 [2024-11-28 05:00:38.698796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.083 [2024-11-28 05:00:38.698802] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.083 [2024-11-28 05:00:38.698810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.083 [2024-11-28 05:00:38.698816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.083 [2024-11-28 05:00:39.097893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:10.083 [2024-11-28 05:00:39.098612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.083 [2024-11-28 05:00:39.098641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.083 [2024-11-28 05:00:39.098650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.083 [2024-11-28 05:00:39.098660] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.083 [2024-11-28 05:00:39.098667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.083 [2024-11-28 05:00:39.098676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.083 [2024-11-28 05:00:39.098682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.083 [2024-11-28 05:00:39.098690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.083 [2024-11-28 05:00:39.098697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.083 [2024-11-28 05:00:39.098704] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.083 [2024-11-28 05:00:39.098711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.083 [2024-11-28 05:00:39.098720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.083 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:10.083 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.083 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.083 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.083 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.083 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.083 05:00:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.083 05:00:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.083 05:00:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.083 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:10.083 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:10.083 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:10.083 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:10.083 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:10.083 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:10.341 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:10.341 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:10.341 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:10.341 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:10.341 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:10.341 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:10.341 05:00:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:22.539 05:00:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.539 05:00:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:22.539 05:00:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:22.539 [2024-11-28 05:00:51.498088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:22.539 [2024-11-28 05:00:51.500239] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.539 [2024-11-28 05:00:51.500267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.539 [2024-11-28 05:00:51.500279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.539 [2024-11-28 05:00:51.500291] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.539 [2024-11-28 05:00:51.500299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.539 [2024-11-28 05:00:51.500306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.539 [2024-11-28 05:00:51.500313] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.539 [2024-11-28 05:00:51.500320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.539 [2024-11-28 05:00:51.500329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.539 [2024-11-28 05:00:51.500335] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.539 [2024-11-28 05:00:51.500343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.539 [2024-11-28 05:00:51.500349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:22.539 05:00:51 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:22.539 05:00:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.539 05:00:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:22.539 05:00:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:22.539 05:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:22.797 [2024-11-28 05:00:51.898097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:22.797 [2024-11-28 05:00:51.898829] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.797 [2024-11-28 05:00:51.898862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.797 [2024-11-28 05:00:51.898872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.797 [2024-11-28 05:00:51.898884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.797 [2024-11-28 05:00:51.898891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.797 [2024-11-28 05:00:51.898899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.797 [2024-11-28 05:00:51.898906] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.797 [2024-11-28 05:00:51.898914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.797 [2024-11-28 05:00:51.898920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.797 [2024-11-28 05:00:51.898927] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.797 [2024-11-28 05:00:51.898934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.797 [2024-11-28 05:00:51.898941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.797 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:22.797 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:22.797 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:22.797 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:22.797 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:22.797 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:22.797 05:00:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.797 05:00:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:22.797 05:00:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.056 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:23.056 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:23.056 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.056 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.056 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:23.056 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:23.056 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.056 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.056 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.056 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:23.056 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:23.056 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.056 05:00:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.255 05:01:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.255 05:01:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.255 05:01:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:35.255 [2024-11-28 05:01:04.398300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:35.255 [2024-11-28 05:01:04.399139] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.255 [2024-11-28 05:01:04.399165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.255 [2024-11-28 05:01:04.399187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.255 [2024-11-28 05:01:04.399200] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.255 [2024-11-28 05:01:04.399211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.255 [2024-11-28 05:01:04.399218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.255 [2024-11-28 05:01:04.399226] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.255 [2024-11-28 05:01:04.399233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.255 [2024-11-28 05:01:04.399241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.255 [2024-11-28 05:01:04.399247] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.255 [2024-11-28 05:01:04.399254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.255 [2024-11-28 05:01:04.399261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.255 05:01:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.255 05:01:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.255 05:01:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:35.255 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:35.822 [2024-11-28 05:01:04.798306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:35.822 [2024-11-28 05:01:04.799015] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.822 [2024-11-28 05:01:04.799046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.822 [2024-11-28 05:01:04.799056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.822 [2024-11-28 05:01:04.799069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.822 [2024-11-28 05:01:04.799076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.822 [2024-11-28 05:01:04.799084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.822 [2024-11-28 05:01:04.799091] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.822 [2024-11-28 05:01:04.799103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.822 [2024-11-28 05:01:04.799109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.822 [2024-11-28 05:01:04.799117] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.822 [2024-11-28 05:01:04.799123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.822 [2024-11-28 05:01:04.799131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.822 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:35.822 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:35.822 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:35.822 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.822 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.822 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.822 05:01:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.822 05:01:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.822 05:01:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.822 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:35.822 05:01:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:35.822 05:01:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:35.822 05:01:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:35.822 05:01:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:35.822 05:01:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:36.079 05:01:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.079 05:01:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.079 05:01:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.079 05:01:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:36.079 05:01:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:36.079 05:01:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.079 05:01:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:48.280 05:01:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:48.280 05:01:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:48.280 05:01:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:48.280 05:01:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.280 05:01:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.280 05:01:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.280 05:01:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.280 05:01:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.280 05:01:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.280 05:01:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:48.280 05:01:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:48.280 05:01:17 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.62 00:11:48.280 05:01:17 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.62 00:11:48.280 05:01:17 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:48.280 05:01:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.62 00:11:48.280 05:01:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.62 2 00:11:48.280 remove_attach_helper took 44.62s to complete (handling 2 nvme drive(s)) 05:01:17 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:48.280 05:01:17 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 78687 00:11:48.280 05:01:17 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 78687 ']' 00:11:48.280 05:01:17 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 78687 00:11:48.280 05:01:17 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:48.280 05:01:17 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.280 05:01:17 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78687 00:11:48.280 05:01:17 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.280 05:01:17 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.280 killing process with pid 78687 00:11:48.280 05:01:17 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78687' 00:11:48.280 05:01:17 sw_hotplug -- common/autotest_common.sh@973 -- # kill 78687 00:11:48.280 05:01:17 sw_hotplug -- common/autotest_common.sh@978 -- # wait 78687 00:11:48.280 05:01:17 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:48.542 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:49.117 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:49.117 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:49.117 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:49.380 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:49.380 00:11:49.380 real 2m28.110s 00:11:49.380 user 1m47.796s 00:11:49.380 sys 0m18.777s 00:11:49.380 05:01:18 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.380 ************************************ 00:11:49.380 END TEST sw_hotplug 00:11:49.380 ************************************ 00:11:49.380 05:01:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.380 05:01:18 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:49.380 05:01:18 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:49.380 05:01:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:49.380 05:01:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.380 05:01:18 -- common/autotest_common.sh@10 -- # set +x 00:11:49.380 ************************************ 00:11:49.380 START TEST nvme_xnvme 00:11:49.380 ************************************ 00:11:49.380 05:01:18 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:49.380 * Looking for test storage... 00:11:49.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:49.380 05:01:18 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:49.380 05:01:18 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:49.380 05:01:18 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:49.645 05:01:18 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.645 05:01:18 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:49.645 05:01:18 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.645 05:01:18 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:49.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.645 --rc genhtml_branch_coverage=1 00:11:49.645 --rc genhtml_function_coverage=1 00:11:49.645 --rc genhtml_legend=1 00:11:49.645 --rc geninfo_all_blocks=1 00:11:49.645 --rc geninfo_unexecuted_blocks=1 00:11:49.645 00:11:49.645 ' 00:11:49.645 05:01:18 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:49.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.645 --rc genhtml_branch_coverage=1 00:11:49.645 --rc genhtml_function_coverage=1 00:11:49.645 --rc genhtml_legend=1 00:11:49.645 --rc geninfo_all_blocks=1 00:11:49.645 --rc geninfo_unexecuted_blocks=1 00:11:49.645 00:11:49.645 ' 00:11:49.645 05:01:18 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:49.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.645 --rc genhtml_branch_coverage=1 00:11:49.645 --rc genhtml_function_coverage=1 00:11:49.645 --rc genhtml_legend=1 00:11:49.645 --rc geninfo_all_blocks=1 00:11:49.645 --rc geninfo_unexecuted_blocks=1 00:11:49.645 00:11:49.645 ' 00:11:49.645 05:01:18 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:49.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.645 --rc genhtml_branch_coverage=1 00:11:49.645 --rc genhtml_function_coverage=1 00:11:49.645 --rc genhtml_legend=1 00:11:49.645 --rc geninfo_all_blocks=1 00:11:49.645 --rc geninfo_unexecuted_blocks=1 00:11:49.645 00:11:49.645 ' 00:11:49.645 05:01:18 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:11:49.645 05:01:18 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:49.645 05:01:18 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:49.645 05:01:18 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:11:49.645 05:01:18 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:49.645 05:01:18 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:49.645 05:01:18 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:49.645 05:01:18 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:49.645 05:01:18 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:49.645 05:01:18 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:49.645 05:01:18 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
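The CONFIG_* listing continues below; it is the shell-side mirror of the generated include/spdk/config.h, and a little further down applications.sh glob-matches that header to decide whether this is a debug build. Reduced to its essentials, that probe looks like:

    # Sketch of the SPDK_CONFIG_DEBUG check applications.sh performs below.
    config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
    if [[ -e "$config_h" && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        : # debug build: SPDK_AUTOTEST_DEBUG_APPS may enable extra app checks
    fi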
00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@78 -- 
# CONFIG_FIO_PLUGIN=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:49.646 05:01:18 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:49.646 05:01:18 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:49.646 05:01:18 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:49.646 05:01:18 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:49.646 05:01:18 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:49.646 05:01:18 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:49.646 05:01:18 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:49.646 05:01:18 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:49.646 05:01:18 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:49.646 05:01:18 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:49.646 05:01:18 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:49.646 05:01:18 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:49.646 05:01:18 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:49.646 05:01:18 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:49.646 05:01:18 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:49.646 05:01:18 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:49.646 05:01:18 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:49.646 #define SPDK_CONFIG_H 00:11:49.646 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:49.646 #define SPDK_CONFIG_APPS 1 00:11:49.646 #define SPDK_CONFIG_ARCH native 00:11:49.646 #define SPDK_CONFIG_ASAN 1 00:11:49.646 #undef SPDK_CONFIG_AVAHI 00:11:49.646 #undef SPDK_CONFIG_CET 00:11:49.646 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:49.646 #define SPDK_CONFIG_COVERAGE 1 00:11:49.646 #define SPDK_CONFIG_CROSS_PREFIX 00:11:49.646 #undef SPDK_CONFIG_CRYPTO 00:11:49.646 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:49.646 #undef SPDK_CONFIG_CUSTOMOCF 00:11:49.646 #undef SPDK_CONFIG_DAOS 00:11:49.646 #define SPDK_CONFIG_DAOS_DIR 00:11:49.646 #define SPDK_CONFIG_DEBUG 1 00:11:49.646 #undef 
SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:49.646 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:11:49.646 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:11:49.646 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:11:49.646 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:49.646 #undef SPDK_CONFIG_DPDK_UADK 00:11:49.646 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:49.646 #define SPDK_CONFIG_EXAMPLES 1 00:11:49.646 #undef SPDK_CONFIG_FC 00:11:49.646 #define SPDK_CONFIG_FC_PATH 00:11:49.646 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:49.646 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:49.646 #define SPDK_CONFIG_FSDEV 1 00:11:49.646 #undef SPDK_CONFIG_FUSE 00:11:49.646 #undef SPDK_CONFIG_FUZZER 00:11:49.646 #define SPDK_CONFIG_FUZZER_LIB 00:11:49.646 #undef SPDK_CONFIG_GOLANG 00:11:49.646 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:49.646 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:49.646 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:49.646 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:49.646 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:49.646 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:49.646 #undef SPDK_CONFIG_HAVE_LZ4 00:11:49.646 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:49.646 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:49.646 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:49.646 #define SPDK_CONFIG_IDXD 1 00:11:49.646 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:49.646 #undef SPDK_CONFIG_IPSEC_MB 00:11:49.646 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:49.646 #define SPDK_CONFIG_ISAL 1 00:11:49.646 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:49.646 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:49.646 #define SPDK_CONFIG_LIBDIR 00:11:49.646 #undef SPDK_CONFIG_LTO 00:11:49.646 #define SPDK_CONFIG_MAX_LCORES 128 00:11:49.646 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:49.646 #define SPDK_CONFIG_NVME_CUSE 1 00:11:49.646 #undef SPDK_CONFIG_OCF 00:11:49.646 #define SPDK_CONFIG_OCF_PATH 00:11:49.646 #define SPDK_CONFIG_OPENSSL_PATH 00:11:49.646 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:49.646 #define SPDK_CONFIG_PGO_DIR 00:11:49.646 #undef SPDK_CONFIG_PGO_USE 00:11:49.646 #define SPDK_CONFIG_PREFIX /usr/local 00:11:49.646 #undef SPDK_CONFIG_RAID5F 00:11:49.646 #undef SPDK_CONFIG_RBD 00:11:49.646 #define SPDK_CONFIG_RDMA 1 00:11:49.646 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:49.646 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:49.646 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:49.647 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:49.647 #define SPDK_CONFIG_SHARED 1 00:11:49.647 #undef SPDK_CONFIG_SMA 00:11:49.647 #define SPDK_CONFIG_TESTS 1 00:11:49.647 #undef SPDK_CONFIG_TSAN 00:11:49.647 #define SPDK_CONFIG_UBLK 1 00:11:49.647 #define SPDK_CONFIG_UBSAN 1 00:11:49.647 #undef SPDK_CONFIG_UNIT_TESTS 00:11:49.647 #undef SPDK_CONFIG_URING 00:11:49.647 #define SPDK_CONFIG_URING_PATH 00:11:49.647 #undef SPDK_CONFIG_URING_ZNS 00:11:49.647 #undef SPDK_CONFIG_USDT 00:11:49.647 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:49.647 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:49.647 #undef SPDK_CONFIG_VFIO_USER 00:11:49.647 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:49.647 #define SPDK_CONFIG_VHOST 1 00:11:49.647 #define SPDK_CONFIG_VIRTIO 1 00:11:49.647 #undef SPDK_CONFIG_VTUNE 00:11:49.647 #define SPDK_CONFIG_VTUNE_DIR 00:11:49.647 #define SPDK_CONFIG_WERROR 1 00:11:49.647 #define SPDK_CONFIG_WPDK_DIR 00:11:49.647 #define SPDK_CONFIG_XNVME 1 00:11:49.647 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:49.647 05:01:18 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:49.647 05:01:18 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:49.647 05:01:18 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.647 05:01:18 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.647 05:01:18 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.647 05:01:18 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.647 05:01:18 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.647 05:01:18 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.647 05:01:18 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:49.647 05:01:18 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@68 -- 
# uname -s 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:49.647 05:01:18 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- 
common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:49.647 05:01:18 nvme_xnvme -- 
common/autotest_common.sh@128 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:11:49.647 05:01:18 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@140 -- # : v22.11.4 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 
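Each `: <value>` / `export SPDK_TEST_*` pair in this run is bash's default-then-export idiom: the no-op `:` command forces parameter expansion, assigning the default only when the caller has not already set the flag, and xtrace prints the expanded value. One common spelling:

    # ': 1' in the trace is the xtrace rendering of the no-op colon command
    # after expansion; the default applies only if the flag was unset.
    : "${SPDK_TEST_NVME:=1}"
    export SPDK_TEST_NVME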
00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
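The sanitizer setup traced above rewrites a LeakSanitizer suppression file and points LSAN_OPTIONS at it, so a known libfuse3 leak does not fail the run. In essence:

    # Recreate the suppression file and register it with LeakSanitizer.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" > "$asan_suppression_file"
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"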
00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:49.648 05:01:18 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 80015 ]] 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 80015 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.2F1omo 00:11:49.649 05:01:18 
nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.2F1omo/tests/xnvme /tmp/spdk.2F1omo 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13343879168 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6238752768 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261960704 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13343879168 00:11:49.649 05:01:18 nvme_xnvme -- 
common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6238752768 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265237504 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98440028160 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1262751744 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:49.649 * Looking for test storage... 
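set_test_storage has just parsed `df -T` into parallel arrays (mount point, filesystem type, size, available, used) and now walks storage_candidates for the first directory whose filesystem can hold the requested 2 GiB plus margin. Simplified, the selection reduces to something like the following (set_test_storage also special-cases tmpfs/ramfs and the root mount):

    # Pick the first candidate directory with enough free space.
    requested_size=2214592512   # bytes, as computed in the trace above

    for dir in "$testdir" "$storage_fallback/tests/xnvme" "$storage_fallback"; do
        avail=$(df -B1 --output=avail "$dir" 2>/dev/null | tail -n1)
        if [[ -n "$avail" ]] && (( avail >= requested_size )); then
            printf '* Found test storage at %s\n' "$dir"
            break
        fi
    done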
00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13343879168 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:49.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:49.649 05:01:18 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:49.649 05:01:18 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:49.649 05:01:18 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:49.649 05:01:18 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:49.649 05:01:18 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.649 05:01:18 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:49.649 05:01:18 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:49.649 05:01:18 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:49.649 05:01:18 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:49.649 05:01:18 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:49.650 05:01:18 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.650 05:01:18 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:49.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.650 --rc genhtml_branch_coverage=1 00:11:49.650 --rc genhtml_function_coverage=1 00:11:49.650 --rc genhtml_legend=1 00:11:49.650 --rc geninfo_all_blocks=1 00:11:49.650 --rc geninfo_unexecuted_blocks=1 00:11:49.650 00:11:49.650 ' 00:11:49.650 05:01:18 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:49.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.650 --rc genhtml_branch_coverage=1 00:11:49.650 --rc genhtml_function_coverage=1 00:11:49.650 --rc genhtml_legend=1 00:11:49.650 --rc geninfo_all_blocks=1 
00:11:49.650 --rc geninfo_unexecuted_blocks=1 00:11:49.650 00:11:49.650 ' 00:11:49.650 05:01:18 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:49.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.650 --rc genhtml_branch_coverage=1 00:11:49.650 --rc genhtml_function_coverage=1 00:11:49.650 --rc genhtml_legend=1 00:11:49.650 --rc geninfo_all_blocks=1 00:11:49.650 --rc geninfo_unexecuted_blocks=1 00:11:49.650 00:11:49.650 ' 00:11:49.650 05:01:18 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:49.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.650 --rc genhtml_branch_coverage=1 00:11:49.650 --rc genhtml_function_coverage=1 00:11:49.650 --rc genhtml_legend=1 00:11:49.650 --rc geninfo_all_blocks=1 00:11:49.650 --rc geninfo_unexecuted_blocks=1 00:11:49.650 00:11:49.650 ' 00:11:49.650 05:01:18 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.650 05:01:18 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.650 05:01:18 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.650 05:01:18 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.650 05:01:18 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.650 05:01:18 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:49.650 05:01:18 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.650 05:01:18 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:11:49.650 05:01:18 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:49.913 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:50.172 Waiting for block devices as requested 00:11:50.172 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:50.172 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:50.433 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:50.433 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:55.729 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:55.729 05:01:24 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:11:55.990 05:01:25 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:11:55.990 05:01:25 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:11:55.990 05:01:25 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:11:55.990 05:01:25 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:11:55.990 05:01:25 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:11:55.990 05:01:25 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:11:55.990 05:01:25 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:56.248 No valid GPT data, bailing 00:11:56.249 05:01:25 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:56.249 05:01:25 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:11:56.249 05:01:25 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:11:56.249 05:01:25 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:11:56.249 05:01:25 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:11:56.249 05:01:25 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:11:56.249 05:01:25 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:11:56.249 05:01:25 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:11:56.249 05:01:25 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:11:56.249 05:01:25 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:56.249 05:01:25 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:11:56.249 05:01:25 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:11:56.249 05:01:25 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:11:56.249 05:01:25 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:11:56.249 05:01:25 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:11:56.249 05:01:25 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:11:56.249 05:01:25 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:11:56.249 05:01:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:56.249 05:01:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.249 05:01:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.249 ************************************ 00:11:56.249 START TEST xnvme_rpc 00:11:56.249 ************************************ 00:11:56.249 05:01:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:11:56.249 05:01:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:11:56.249 05:01:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:11:56.249 05:01:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:11:56.249 05:01:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:11:56.249 05:01:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80402 00:11:56.249 05:01:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80402 00:11:56.249 05:01:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:56.249 05:01:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80402 ']' 00:11:56.249 05:01:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.249 05:01:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.249 05:01:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.249 05:01:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.249 05:01:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.249 [2024-11-28 05:01:25.434353] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
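Note: the "No valid GPT data, bailing" output above is the pass condition, not a failure. block_in_use only reports a device as busy when it carries a partition table, so a failed GPT probe plus an empty blkid PTTYPE means /dev/nvme0n1 is safe to hand to the xnvme tests. A simplified sketch of that check, assuming spdk-gpt.py exits non-zero when it bails and using this run's repo path:

# Hedged sketch of the in-use probe; not the exact scripts/common.sh code.
block_in_use() {
    local block=$1
    # assumption: spdk-gpt.py exits 0 only when it finds valid GPT data
    if /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$block" >/dev/null; then
        return 0                                   # partitioned: in use
    fi
    [[ -n $(blkid -s PTTYPE -o value "$block") ]]  # any other table: in use
}
block_in_use /dev/nvme0n1 || echo "/dev/nvme0n1 is free for the xnvme tests"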
00:11:56.249 [2024-11-28 05:01:25.434479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80402 ] 00:11:56.510 [2024-11-28 05:01:25.580323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.510 [2024-11-28 05:01:25.600867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.077 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.077 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:57.077 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.078 xnvme_bdev 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80402 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80402 ']' 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80402 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80402 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.078 killing process with pid 80402 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80402' 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80402 00:11:57.078 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80402 00:11:57.336 00:11:57.336 real 0m1.247s 00:11:57.336 user 0m1.261s 00:11:57.336 sys 0m0.333s 00:11:57.336 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.336 05:01:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.336 ************************************ 00:11:57.336 END TEST xnvme_rpc 00:11:57.336 ************************************ 00:11:57.596 05:01:26 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:57.596 05:01:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:57.596 05:01:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.596 05:01:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:57.596 ************************************ 00:11:57.596 START TEST xnvme_bdevperf 00:11:57.596 ************************************ 00:11:57.596 05:01:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:11:57.596 05:01:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:11:57.596 05:01:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:11:57.596 05:01:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:11:57.596 05:01:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:11:57.596 05:01:26 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:11:57.596 05:01:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:57.596 05:01:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:57.596 { 00:11:57.596 "subsystems": [ 00:11:57.596 { 00:11:57.596 "subsystem": "bdev", 00:11:57.596 "config": [ 00:11:57.596 { 00:11:57.596 "params": { 00:11:57.596 "io_mechanism": "libaio", 00:11:57.596 "conserve_cpu": false, 00:11:57.596 "filename": "/dev/nvme0n1", 00:11:57.596 "name": "xnvme_bdev" 00:11:57.596 }, 00:11:57.596 "method": "bdev_xnvme_create" 00:11:57.596 }, 00:11:57.596 { 00:11:57.596 "method": "bdev_wait_for_examine" 00:11:57.596 } 00:11:57.596 ] 00:11:57.596 } 00:11:57.596 ] 00:11:57.596 } 00:11:57.596 [2024-11-28 05:01:26.727766] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:11:57.596 [2024-11-28 05:01:26.727887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80460 ] 00:11:57.596 [2024-11-28 05:01:26.875472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.859 [2024-11-28 05:01:26.896071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.859 Running I/O for 5 seconds... 00:11:59.745 23684.00 IOPS, 92.52 MiB/s [2024-11-28T05:01:30.461Z] 24531.50 IOPS, 95.83 MiB/s [2024-11-28T05:01:31.058Z] 23917.33 IOPS, 93.43 MiB/s [2024-11-28T05:01:32.444Z] 23757.50 IOPS, 92.80 MiB/s [2024-11-28T05:01:32.444Z] 23467.80 IOPS, 91.67 MiB/s 00:12:03.160 Latency(us) 00:12:03.160 [2024-11-28T05:01:32.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.160 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:03.160 xnvme_bdev : 5.01 23430.58 91.53 0.00 0.00 2726.21 516.73 8922.98 00:12:03.160 [2024-11-28T05:01:32.444Z] =================================================================================================================== 00:12:03.160 [2024-11-28T05:01:32.444Z] Total : 23430.58 91.53 0.00 0.00 2726.21 516.73 8922.98 00:12:03.160 05:01:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:03.160 05:01:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:03.160 05:01:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:03.160 05:01:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:03.160 05:01:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:03.160 { 00:12:03.160 "subsystems": [ 00:12:03.160 { 00:12:03.160 "subsystem": "bdev", 00:12:03.160 "config": [ 00:12:03.160 { 00:12:03.160 "params": { 00:12:03.160 "io_mechanism": "libaio", 00:12:03.160 "conserve_cpu": false, 00:12:03.160 "filename": "/dev/nvme0n1", 00:12:03.160 "name": "xnvme_bdev" 00:12:03.160 }, 00:12:03.160 "method": "bdev_xnvme_create" 00:12:03.160 }, 00:12:03.160 { 00:12:03.160 "method": "bdev_wait_for_examine" 00:12:03.160 } 00:12:03.160 ] 00:12:03.160 } 00:12:03.160 ] 00:12:03.160 } 00:12:03.160 [2024-11-28 05:01:32.257941] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
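The bdevperf runs above never touch a config file on disk: gen_conf prints the JSON shown and the shell hands it to bdevperf as /dev/fd/62. A minimal way to replay one of these runs by hand, assuming the same repo layout, is process substitution, which plays the role of the inherited descriptor:

# Sketch: replay the randwrite bdevperf job with an inline JSON config.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(cat <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_xnvme_create", "params": {"io_mechanism": "libaio",
   "conserve_cpu": false, "filename": "/dev/nvme0n1", "name": "xnvme_bdev"}},
  {"method": "bdev_wait_for_examine"}]}]}
EOF
) -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096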
00:12:03.160 [2024-11-28 05:01:32.258073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80529 ] 00:12:03.160 [2024-11-28 05:01:32.404235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.160 [2024-11-28 05:01:32.432754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.421 Running I/O for 5 seconds... 00:12:05.311 25142.00 IOPS, 98.21 MiB/s [2024-11-28T05:01:35.981Z] 27022.00 IOPS, 105.55 MiB/s [2024-11-28T05:01:36.926Z] 26813.67 IOPS, 104.74 MiB/s [2024-11-28T05:01:37.871Z] 27360.00 IOPS, 106.88 MiB/s 00:12:08.587 Latency(us) 00:12:08.587 [2024-11-28T05:01:37.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.587 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:08.587 xnvme_bdev : 5.00 28033.12 109.50 0.00 0.00 2277.94 365.49 7108.14 00:12:08.587 [2024-11-28T05:01:37.871Z] =================================================================================================================== 00:12:08.587 [2024-11-28T05:01:37.871Z] Total : 28033.12 109.50 0.00 0.00 2277.94 365.49 7108.14 00:12:08.587 00:12:08.587 real 0m11.104s 00:12:08.587 user 0m3.010s 00:12:08.587 sys 0m6.828s 00:12:08.587 ************************************ 00:12:08.587 END TEST xnvme_bdevperf 00:12:08.587 ************************************ 00:12:08.587 05:01:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.587 05:01:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:08.587 05:01:37 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:08.587 05:01:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:08.587 05:01:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.587 05:01:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:08.587 ************************************ 00:12:08.587 START TEST xnvme_fio_plugin 00:12:08.587 ************************************ 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:08.587 05:01:37 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:08.587 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:08.849 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:08.849 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:08.849 05:01:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:08.849 { 00:12:08.849 "subsystems": [ 00:12:08.849 { 00:12:08.849 "subsystem": "bdev", 00:12:08.849 "config": [ 00:12:08.849 { 00:12:08.849 "params": { 00:12:08.849 "io_mechanism": "libaio", 00:12:08.849 "conserve_cpu": false, 00:12:08.849 "filename": "/dev/nvme0n1", 00:12:08.849 "name": "xnvme_bdev" 00:12:08.849 }, 00:12:08.849 "method": "bdev_xnvme_create" 00:12:08.849 }, 00:12:08.849 { 00:12:08.849 "method": "bdev_wait_for_examine" 00:12:08.849 } 00:12:08.849 ] 00:12:08.849 } 00:12:08.849 ] 00:12:08.849 } 00:12:08.849 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:08.849 fio-3.35 00:12:08.849 Starting 1 thread 00:12:15.446 00:12:15.446 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=80635: Thu Nov 28 05:01:43 2024 00:12:15.446 read: IOPS=28.6k, BW=112MiB/s (117MB/s)(559MiB/5001msec) 00:12:15.446 slat (usec): min=4, max=2403, avg=26.39, stdev=110.94 00:12:15.446 clat (usec): min=108, max=6266, avg=1518.77, stdev=610.98 00:12:15.446 lat (usec): min=191, max=6278, avg=1545.16, stdev=599.97 00:12:15.446 clat percentiles (usec): 00:12:15.446 | 1.00th=[ 293], 5.00th=[ 586], 10.00th=[ 742], 20.00th=[ 1004], 00:12:15.446 | 30.00th=[ 1205], 40.00th=[ 1352], 50.00th=[ 1516], 60.00th=[ 1647], 00:12:15.446 | 70.00th=[ 1795], 80.00th=[ 1958], 90.00th=[ 2245], 95.00th=[ 2540], 00:12:15.446 | 99.00th=[ 3294], 99.50th=[ 3621], 99.90th=[ 4490], 99.95th=[ 4817], 00:12:15.446 | 99.99th=[ 5211] 00:12:15.446 bw ( KiB/s): min=103424, max=124912, per=99.92%, avg=114380.56, stdev=6324.36, 
samples=9 00:12:15.446 iops : min=25856, max=31228, avg=28595.11, stdev=1581.10, samples=9 00:12:15.446 lat (usec) : 250=0.57%, 500=2.85%, 750=6.88%, 1000=9.51% 00:12:15.446 lat (msec) : 2=61.90%, 4=18.01%, 10=0.28% 00:12:15.446 cpu : usr=36.34%, sys=55.30%, ctx=10, majf=0, minf=1065 00:12:15.446 IO depths : 1=0.4%, 2=1.1%, 4=2.9%, 8=8.2%, 16=23.4%, 32=61.9%, >=64=2.0% 00:12:15.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.446 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:15.446 issued rwts: total=143112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.446 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:15.446 00:12:15.446 Run status group 0 (all jobs): 00:12:15.446 READ: bw=112MiB/s (117MB/s), 112MiB/s-112MiB/s (117MB/s-117MB/s), io=559MiB (586MB), run=5001-5001msec 00:12:15.446 ----------------------------------------------------- 00:12:15.446 Suppressions used: 00:12:15.446 count bytes template 00:12:15.446 1 11 /usr/src/fio/parse.c 00:12:15.446 1 8 libtcmalloc_minimal.so 00:12:15.446 1 904 libcrypto.so 00:12:15.446 ----------------------------------------------------- 00:12:15.446 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:15.446 05:01:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:15.446 { 00:12:15.446 "subsystems": [ 00:12:15.446 { 00:12:15.446 "subsystem": "bdev", 00:12:15.446 "config": [ 00:12:15.446 { 00:12:15.446 "params": { 00:12:15.446 "io_mechanism": "libaio", 00:12:15.446 "conserve_cpu": false, 00:12:15.446 "filename": "/dev/nvme0n1", 00:12:15.446 "name": "xnvme_bdev" 00:12:15.446 }, 00:12:15.446 "method": "bdev_xnvme_create" 00:12:15.446 }, 00:12:15.446 { 00:12:15.446 "method": "bdev_wait_for_examine" 00:12:15.446 } 00:12:15.446 ] 00:12:15.446 } 00:12:15.446 ] 00:12:15.446 } 00:12:15.446 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:15.446 fio-3.35 00:12:15.446 Starting 1 thread 00:12:20.739 00:12:20.739 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=80721: Thu Nov 28 05:01:49 2024 00:12:20.739 write: IOPS=30.3k, BW=118MiB/s (124MB/s)(593MiB/5001msec); 0 zone resets 00:12:20.739 slat (usec): min=4, max=2249, avg=26.44, stdev=97.43 00:12:20.739 clat (usec): min=108, max=5107, avg=1384.69, stdev=609.67 00:12:20.739 lat (usec): min=191, max=5734, avg=1411.13, stdev=601.94 00:12:20.739 clat percentiles (usec): 00:12:20.739 | 1.00th=[ 269], 5.00th=[ 474], 10.00th=[ 635], 20.00th=[ 848], 00:12:20.739 | 30.00th=[ 1020], 40.00th=[ 1188], 50.00th=[ 1352], 60.00th=[ 1516], 00:12:20.739 | 70.00th=[ 1680], 80.00th=[ 1876], 90.00th=[ 2147], 95.00th=[ 2442], 00:12:20.739 | 99.00th=[ 3130], 99.50th=[ 3359], 99.90th=[ 4015], 99.95th=[ 4178], 00:12:20.739 | 99.99th=[ 4555] 00:12:20.739 bw ( KiB/s): min=110592, max=135264, per=99.28%, avg=120471.78, stdev=8663.77, samples=9 00:12:20.739 iops : min=27648, max=33816, avg=30117.89, stdev=2166.00, samples=9 00:12:20.739 lat (usec) : 250=0.79%, 500=4.89%, 750=9.34%, 1000=13.95% 00:12:20.739 lat (msec) : 2=56.59%, 4=14.35%, 10=0.10% 00:12:20.739 cpu : usr=32.34%, sys=57.72%, ctx=6, majf=0, minf=1066 00:12:20.739 IO depths : 1=0.3%, 2=0.9%, 4=2.7%, 8=8.3%, 16=24.0%, 32=61.8%, >=64=2.0% 00:12:20.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.739 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:20.739 issued rwts: total=0,151709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.739 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:20.739 00:12:20.739 Run status group 0 (all jobs): 00:12:20.739 WRITE: bw=118MiB/s (124MB/s), 118MiB/s-118MiB/s (124MB/s-124MB/s), io=593MiB (621MB), run=5001-5001msec 00:12:20.739 ----------------------------------------------------- 00:12:20.739 Suppressions used: 00:12:20.739 count bytes template 00:12:20.739 1 11 /usr/src/fio/parse.c 00:12:20.739 1 8 libtcmalloc_minimal.so 00:12:20.739 1 904 libcrypto.so 00:12:20.739 ----------------------------------------------------- 00:12:20.739 00:12:20.739 ************************************ 00:12:20.739 END TEST xnvme_fio_plugin 00:12:20.739 ************************************ 
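Most of the fio_bdev plumbing above is about preload order: with an ASan-instrumented build, libasan.so.8 has to come before the spdk_bdev plugin in LD_PRELOAD, or fio fails when it loads the ioengine. Stripped of the helper indirection, the randwrite job that just finished is equivalent to (paths taken verbatim from this run; the JSON on fd 62 is the bdev config printed above):

# Effective fio command as assembled by the trace above.
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev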
00:12:20.739 00:12:20.739 real 0m12.112s 00:12:20.739 user 0m4.597s 00:12:20.739 sys 0m6.232s 00:12:20.739 05:01:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.739 05:01:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:20.739 05:01:50 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:20.739 05:01:50 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:20.739 05:01:50 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:20.739 05:01:50 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:20.739 05:01:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:20.739 05:01:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.739 05:01:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.001 ************************************ 00:12:21.001 START TEST xnvme_rpc 00:12:21.001 ************************************ 00:12:21.001 05:01:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:21.001 05:01:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:21.001 05:01:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:21.001 05:01:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:21.001 05:01:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:21.001 05:01:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80801 00:12:21.001 05:01:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80801 00:12:21.001 05:01:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80801 ']' 00:12:21.001 05:01:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.001 05:01:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.001 05:01:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.001 05:01:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.001 05:01:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.001 05:01:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:21.001 [2024-11-28 05:01:50.128414] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
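The xnvme_rpc pass starting here repeats the first one with CPU conservation on: the create call that follows carries a trailing -c, which becomes conserve_cpu=true in the bdev config, so the xnvme queue poller yields instead of busy-spinning. Outside the test harness the same round trip looks roughly like this, assuming rpc.py talks to the default /var/tmp/spdk.sock of the spdk_tgt just started:

# Sketch of the create / verify / delete sequence driven by rpc_cmd above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # -c => conserve_cpu
$rpc framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'  # -> true
$rpc bdev_xnvme_delete xnvme_bdev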
00:12:21.001 [2024-11-28 05:01:50.128575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80801 ] 00:12:21.001 [2024-11-28 05:01:50.277583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.264 [2024-11-28 05:01:50.306149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.836 05:01:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.836 05:01:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:21.836 05:01:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:21.836 05:01:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.836 05:01:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.836 xnvme_bdev 00:12:21.836 05:01:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.836 05:01:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:21.836 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:21.836 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:21.836 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.836 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.836 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.836 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:21.836 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:21.836 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:21.836 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.836 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.836 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:21.836 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.836 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:21.836 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:21.836 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:21.836 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.837 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:21.837 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.837 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.837 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:21.837 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:21.837 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:21.837 05:01:51 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.837 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.837 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:21.837 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.098 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:22.098 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:22.098 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.098 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.098 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.098 05:01:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80801 00:12:22.098 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80801 ']' 00:12:22.098 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80801 00:12:22.098 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:22.098 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.098 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80801 00:12:22.098 killing process with pid 80801 00:12:22.098 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:22.098 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:22.098 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80801' 00:12:22.098 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80801 00:12:22.098 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80801 00:12:22.359 ************************************ 00:12:22.359 END TEST xnvme_rpc 00:12:22.359 ************************************ 00:12:22.359 00:12:22.359 real 0m1.404s 00:12:22.359 user 0m1.467s 00:12:22.359 sys 0m0.420s 00:12:22.359 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.360 05:01:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.360 05:01:51 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:22.360 05:01:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:22.360 05:01:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.360 05:01:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:22.360 ************************************ 00:12:22.360 START TEST xnvme_bdevperf 00:12:22.360 ************************************ 00:12:22.360 05:01:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:22.360 05:01:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:22.360 05:01:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:22.360 05:01:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:22.360 05:01:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:22.360 05:01:51 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:22.360 05:01:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:22.360 05:01:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:22.360 { 00:12:22.360 "subsystems": [ 00:12:22.360 { 00:12:22.360 "subsystem": "bdev", 00:12:22.360 "config": [ 00:12:22.360 { 00:12:22.360 "params": { 00:12:22.360 "io_mechanism": "libaio", 00:12:22.360 "conserve_cpu": true, 00:12:22.360 "filename": "/dev/nvme0n1", 00:12:22.360 "name": "xnvme_bdev" 00:12:22.360 }, 00:12:22.360 "method": "bdev_xnvme_create" 00:12:22.360 }, 00:12:22.360 { 00:12:22.360 "method": "bdev_wait_for_examine" 00:12:22.360 } 00:12:22.360 ] 00:12:22.360 } 00:12:22.360 ] 00:12:22.360 } 00:12:22.360 [2024-11-28 05:01:51.571264] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:12:22.360 [2024-11-28 05:01:51.571617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80854 ] 00:12:22.621 [2024-11-28 05:01:51.720473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.621 [2024-11-28 05:01:51.748799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.621 Running I/O for 5 seconds... 00:12:24.952 25640.00 IOPS, 100.16 MiB/s [2024-11-28T05:01:55.182Z] 26493.00 IOPS, 103.49 MiB/s [2024-11-28T05:01:56.131Z] 26273.00 IOPS, 102.63 MiB/s [2024-11-28T05:01:57.076Z] 26408.75 IOPS, 103.16 MiB/s 00:12:27.792 Latency(us) 00:12:27.793 [2024-11-28T05:01:57.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.793 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:27.793 xnvme_bdev : 5.00 26289.87 102.69 0.00 0.00 2429.30 231.58 9326.28 00:12:27.793 [2024-11-28T05:01:57.077Z] =================================================================================================================== 00:12:27.793 [2024-11-28T05:01:57.077Z] Total : 26289.87 102.69 0.00 0.00 2429.30 231.58 9326.28 00:12:27.793 05:01:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:27.793 05:01:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:27.793 05:01:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:27.793 05:01:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:27.793 05:01:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:28.054 { 00:12:28.054 "subsystems": [ 00:12:28.054 { 00:12:28.054 "subsystem": "bdev", 00:12:28.054 "config": [ 00:12:28.054 { 00:12:28.054 "params": { 00:12:28.054 "io_mechanism": "libaio", 00:12:28.054 "conserve_cpu": true, 00:12:28.054 "filename": "/dev/nvme0n1", 00:12:28.054 "name": "xnvme_bdev" 00:12:28.054 }, 00:12:28.054 "method": "bdev_xnvme_create" 00:12:28.054 }, 00:12:28.054 { 00:12:28.054 "method": "bdev_wait_for_examine" 00:12:28.054 } 00:12:28.054 ] 00:12:28.054 } 00:12:28.054 ] 00:12:28.054 } 00:12:28.054 [2024-11-28 05:01:57.135935] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
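The MiB/s column in these result tables is pure arithmetic on the IOPS column at the fixed 4096-byte IO size, which makes a quick consistency check possible, for example against the randread row above:

# 26289.87 IOPS * 4096 B per IO / 2^20 B per MiB = 102.69 MiB/s, as reported.
awk 'BEGIN { printf "%.2f MiB/s\n", 26289.87 * 4096 / 1048576 }'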
00:12:28.054 [2024-11-28 05:01:57.136327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80918 ] 00:12:28.054 [2024-11-28 05:01:57.282194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.054 [2024-11-28 05:01:57.310508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.316 Running I/O for 5 seconds... 00:12:30.202 29641.00 IOPS, 115.79 MiB/s [2024-11-28T05:02:00.870Z] 29450.00 IOPS, 115.04 MiB/s [2024-11-28T05:02:01.442Z] 28502.33 IOPS, 111.34 MiB/s [2024-11-28T05:02:02.454Z] 28649.75 IOPS, 111.91 MiB/s 00:12:33.170 Latency(us) 00:12:33.170 [2024-11-28T05:02:02.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.170 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:33.170 xnvme_bdev : 5.00 27845.10 108.77 0.00 0.00 2293.31 397.00 8519.68 00:12:33.170 [2024-11-28T05:02:02.454Z] =================================================================================================================== 00:12:33.170 [2024-11-28T05:02:02.454Z] Total : 27845.10 108.77 0.00 0.00 2293.31 397.00 8519.68 00:12:33.432 ************************************ 00:12:33.432 END TEST xnvme_bdevperf 00:12:33.432 ************************************ 00:12:33.432 00:12:33.432 real 0m11.123s 00:12:33.432 user 0m3.228s 00:12:33.432 sys 0m6.566s 00:12:33.432 05:02:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.432 05:02:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:33.432 05:02:02 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:33.432 05:02:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:33.432 05:02:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.432 05:02:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.432 ************************************ 00:12:33.432 START TEST xnvme_fio_plugin 00:12:33.432 ************************************ 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:33.432 05:02:02 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:33.432 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:33.694 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:33.694 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:33.694 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:33.694 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:33.694 05:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:33.694 { 00:12:33.694 "subsystems": [ 00:12:33.694 { 00:12:33.694 "subsystem": "bdev", 00:12:33.694 "config": [ 00:12:33.694 { 00:12:33.694 "params": { 00:12:33.694 "io_mechanism": "libaio", 00:12:33.694 "conserve_cpu": true, 00:12:33.694 "filename": "/dev/nvme0n1", 00:12:33.694 "name": "xnvme_bdev" 00:12:33.694 }, 00:12:33.694 "method": "bdev_xnvme_create" 00:12:33.694 }, 00:12:33.694 { 00:12:33.694 "method": "bdev_wait_for_examine" 00:12:33.694 } 00:12:33.694 ] 00:12:33.694 } 00:12:33.694 ] 00:12:33.694 } 00:12:33.694 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:33.694 fio-3.35 00:12:33.694 Starting 1 thread 00:12:40.285 00:12:40.285 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81026: Thu Nov 28 05:02:08 2024 00:12:40.285 read: IOPS=28.3k, BW=111MiB/s (116MB/s)(553MiB/5001msec) 00:12:40.285 slat (usec): min=4, max=2241, avg=28.10, stdev=112.41 00:12:40.285 clat (usec): min=44, max=5705, avg=1496.83, stdev=636.60 00:12:40.285 lat (usec): min=180, max=5890, avg=1524.93, stdev=626.64 00:12:40.285 clat percentiles (usec): 00:12:40.285 | 1.00th=[ 277], 5.00th=[ 529], 10.00th=[ 693], 20.00th=[ 947], 00:12:40.285 | 30.00th=[ 1139], 40.00th=[ 1303], 50.00th=[ 1467], 60.00th=[ 1631], 00:12:40.285 | 70.00th=[ 1795], 80.00th=[ 1991], 90.00th=[ 2278], 95.00th=[ 2606], 00:12:40.285 | 99.00th=[ 3261], 99.50th=[ 3523], 99.90th=[ 4228], 99.95th=[ 4555], 00:12:40.285 | 99.99th=[ 5276] 00:12:40.285 bw ( KiB/s): min=104160, max=116288, per=98.86%, avg=111864.00, stdev=3642.30, 
samples=9 00:12:40.285 iops : min=26040, max=29072, avg=27966.00, stdev=910.57, samples=9 00:12:40.285 lat (usec) : 50=0.01%, 250=0.65%, 500=3.74%, 750=7.65%, 1000=10.63% 00:12:40.285 lat (msec) : 2=57.71%, 4=19.49%, 10=0.13% 00:12:40.285 cpu : usr=33.42%, sys=57.66%, ctx=12, majf=0, minf=1065 00:12:40.285 IO depths : 1=0.3%, 2=0.9%, 4=2.6%, 8=8.1%, 16=23.9%, 32=62.1%, >=64=2.1% 00:12:40.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.285 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:40.285 issued rwts: total=141469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.285 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:40.285 00:12:40.285 Run status group 0 (all jobs): 00:12:40.285 READ: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=553MiB (579MB), run=5001-5001msec 00:12:40.285 ----------------------------------------------------- 00:12:40.285 Suppressions used: 00:12:40.285 count bytes template 00:12:40.285 1 11 /usr/src/fio/parse.c 00:12:40.285 1 8 libtcmalloc_minimal.so 00:12:40.285 1 904 libcrypto.so 00:12:40.285 ----------------------------------------------------- 00:12:40.285 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:40.285 05:02:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:40.285 { 00:12:40.285 "subsystems": [ 00:12:40.285 { 00:12:40.285 "subsystem": "bdev", 00:12:40.285 "config": [ 00:12:40.285 { 00:12:40.285 "params": { 00:12:40.285 "io_mechanism": "libaio", 00:12:40.285 "conserve_cpu": true, 00:12:40.285 "filename": "/dev/nvme0n1", 00:12:40.285 "name": "xnvme_bdev" 00:12:40.285 }, 00:12:40.285 "method": "bdev_xnvme_create" 00:12:40.285 }, 00:12:40.285 { 00:12:40.285 "method": "bdev_wait_for_examine" 00:12:40.285 } 00:12:40.285 ] 00:12:40.285 } 00:12:40.285 ] 00:12:40.285 } 00:12:40.285 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:40.285 fio-3.35 00:12:40.285 Starting 1 thread 00:12:45.573 00:12:45.574 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81111: Thu Nov 28 05:02:14 2024 00:12:45.574 write: IOPS=31.1k, BW=122MiB/s (127MB/s)(608MiB/5001msec); 0 zone resets 00:12:45.574 slat (usec): min=4, max=1862, avg=24.75, stdev=91.87 00:12:45.574 clat (usec): min=108, max=6801, avg=1370.37, stdev=601.07 00:12:45.574 lat (usec): min=180, max=6838, avg=1395.12, stdev=593.97 00:12:45.574 clat percentiles (usec): 00:12:45.574 | 1.00th=[ 269], 5.00th=[ 469], 10.00th=[ 635], 20.00th=[ 865], 00:12:45.574 | 30.00th=[ 1037], 40.00th=[ 1188], 50.00th=[ 1336], 60.00th=[ 1483], 00:12:45.574 | 70.00th=[ 1631], 80.00th=[ 1811], 90.00th=[ 2089], 95.00th=[ 2376], 00:12:45.574 | 99.00th=[ 3163], 99.50th=[ 3490], 99.90th=[ 4228], 99.95th=[ 4686], 00:12:45.574 | 99.99th=[ 6718] 00:12:45.574 bw ( KiB/s): min=117752, max=141104, per=100.00%, avg=126298.56, stdev=6588.74, samples=9 00:12:45.574 iops : min=29438, max=35276, avg=31574.56, stdev=1647.16, samples=9 00:12:45.574 lat (usec) : 250=0.78%, 500=4.98%, 750=8.59%, 1000=13.37% 00:12:45.574 lat (msec) : 2=59.55%, 4=12.57%, 10=0.17% 00:12:45.574 cpu : usr=35.18%, sys=54.78%, ctx=17, majf=0, minf=1066 00:12:45.574 IO depths : 1=0.4%, 2=1.0%, 4=3.1%, 8=8.6%, 16=23.4%, 32=61.4%, >=64=2.0% 00:12:45.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.574 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:45.574 issued rwts: total=0,155583,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.574 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:45.574 00:12:45.574 Run status group 0 (all jobs): 00:12:45.574 WRITE: bw=122MiB/s (127MB/s), 122MiB/s-122MiB/s (127MB/s-127MB/s), io=608MiB (637MB), run=5001-5001msec 00:12:45.574 ----------------------------------------------------- 00:12:45.574 Suppressions used: 00:12:45.574 count bytes template 00:12:45.574 1 11 /usr/src/fio/parse.c 00:12:45.574 1 8 libtcmalloc_minimal.so 00:12:45.574 1 904 libcrypto.so 00:12:45.574 ----------------------------------------------------- 00:12:45.574 00:12:45.574 ************************************ 00:12:45.574 END TEST xnvme_fio_plugin 00:12:45.574 ************************************ 
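
The fio-plugin runs traced above all reduce to one shape: resolve the ASan runtime the plugin links against, LD_PRELOAD it together with the plugin, and hand fio a bdev config on fd 62. A minimal sketch assembled from the commands visible in this trace — the plugin path, the ldd|grep|awk detection, and the JSON body are taken verbatim from the log, while the here-document on fd 62 is an assumed stand-in for the harness's gen_conf plumbing, not the script's literal code:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# same detection the trace runs: find the libasan the plugin was built against
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev 62<<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_xnvme_create",
   "params": {"io_mechanism": "libaio", "conserve_cpu": true,
              "filename": "/dev/nvme0n1", "name": "xnvme_bdev"}},
  {"method": "bdev_wait_for_examine"}]}]}
JSON
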
00:12:45.574 00:12:45.574 real 0m12.094s 00:12:45.574 user 0m4.570s 00:12:45.574 sys 0m6.196s 00:12:45.574 05:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.574 05:02:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:45.574 05:02:14 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:45.574 05:02:14 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:45.574 05:02:14 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:45.574 05:02:14 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:45.574 05:02:14 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:45.574 05:02:14 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:45.574 05:02:14 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:45.574 05:02:14 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:45.574 05:02:14 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:45.574 05:02:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:45.574 05:02:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.574 05:02:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.835 ************************************ 00:12:45.835 START TEST xnvme_rpc 00:12:45.835 ************************************ 00:12:45.835 05:02:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:45.835 05:02:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:45.835 05:02:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:45.835 05:02:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:45.835 05:02:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:45.835 05:02:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81193 00:12:45.835 05:02:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81193 00:12:45.835 05:02:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81193 ']' 00:12:45.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.836 05:02:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.836 05:02:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.836 05:02:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:45.836 05:02:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.836 05:02:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.836 05:02:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.836 [2024-11-28 05:02:14.954465] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:12:45.836 [2024-11-28 05:02:14.955287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81193 ] 00:12:45.836 [2024-11-28 05:02:15.108413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.097 [2024-11-28 05:02:15.138220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.670 xnvme_bdev 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:46.670 05:02:15 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:46.670 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.671 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.671 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.671 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:46.671 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:46.671 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.671 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.932 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.932 05:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81193 00:12:46.932 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81193 ']' 00:12:46.932 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81193 00:12:46.932 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:46.932 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.933 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81193 00:12:46.933 killing process with pid 81193 00:12:46.933 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.933 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.933 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81193' 00:12:46.933 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81193 00:12:46.933 05:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81193 00:12:47.195 00:12:47.195 real 0m1.456s 00:12:47.195 user 0m1.499s 00:12:47.195 sys 0m0.453s 00:12:47.195 05:02:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.195 05:02:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.195 ************************************ 00:12:47.195 END TEST xnvme_rpc 00:12:47.195 ************************************ 00:12:47.195 05:02:16 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:47.195 05:02:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:47.195 05:02:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.195 05:02:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.195 ************************************ 00:12:47.195 START TEST xnvme_bdevperf 00:12:47.195 ************************************ 00:12:47.195 05:02:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:47.195 05:02:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:47.195 05:02:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:12:47.195 05:02:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:47.195 05:02:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:47.195 05:02:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:47.195 05:02:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:47.195 05:02:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:47.195 { 00:12:47.195 "subsystems": [ 00:12:47.195 { 00:12:47.195 "subsystem": "bdev", 00:12:47.195 "config": [ 00:12:47.195 { 00:12:47.195 "params": { 00:12:47.195 "io_mechanism": "io_uring", 00:12:47.195 "conserve_cpu": false, 00:12:47.195 "filename": "/dev/nvme0n1", 00:12:47.195 "name": "xnvme_bdev" 00:12:47.195 }, 00:12:47.195 "method": "bdev_xnvme_create" 00:12:47.195 }, 00:12:47.195 { 00:12:47.195 "method": "bdev_wait_for_examine" 00:12:47.195 } 00:12:47.195 ] 00:12:47.195 } 00:12:47.195 ] 00:12:47.195 } 00:12:47.195 [2024-11-28 05:02:16.463125] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:12:47.195 [2024-11-28 05:02:16.463303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81245 ] 00:12:47.457 [2024-11-28 05:02:16.612294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.457 [2024-11-28 05:02:16.640733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.720 Running I/O for 5 seconds... 00:12:49.609 32128.00 IOPS, 125.50 MiB/s [2024-11-28T05:02:19.839Z] 31744.00 IOPS, 124.00 MiB/s [2024-11-28T05:02:20.783Z] 31872.00 IOPS, 124.50 MiB/s [2024-11-28T05:02:22.170Z] 31920.00 IOPS, 124.69 MiB/s [2024-11-28T05:02:22.170Z] 31923.20 IOPS, 124.70 MiB/s 00:12:52.886 Latency(us) 00:12:52.886 [2024-11-28T05:02:22.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.886 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:52.886 xnvme_bdev : 5.00 31924.02 124.70 0.00 0.00 2001.32 1058.66 4612.73 00:12:52.886 [2024-11-28T05:02:22.170Z] =================================================================================================================== 00:12:52.886 [2024-11-28T05:02:22.170Z] Total : 31924.02 124.70 0.00 0.00 2001.32 1058.66 4612.73 00:12:52.886 05:02:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:52.886 05:02:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:52.886 05:02:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:52.886 05:02:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:52.886 05:02:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:52.886 { 00:12:52.886 "subsystems": [ 00:12:52.886 { 00:12:52.886 "subsystem": "bdev", 00:12:52.886 "config": [ 00:12:52.886 { 00:12:52.886 "params": { 00:12:52.886 "io_mechanism": "io_uring", 00:12:52.886 "conserve_cpu": false, 00:12:52.886 "filename": "/dev/nvme0n1", 00:12:52.886 "name": "xnvme_bdev" 00:12:52.886 }, 00:12:52.886 "method": "bdev_xnvme_create" 00:12:52.886 }, 00:12:52.886 { 00:12:52.886 "method": "bdev_wait_for_examine" 00:12:52.886 } 00:12:52.886 ] 00:12:52.886 } 00:12:52.886 ] 00:12:52.886 } 00:12:52.886 [2024-11-28 05:02:22.085642] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
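
Both bdevperf passes in this test use a single invocation, differing only in the -w workload. The flags below are copied from the trace; as above, the here-document on fd 62 stands in (by assumption) for the gen_conf plumbing:

# qd 64, 4 KiB I/Os, 5 s time-based, restricted to the bdev named xnvme_bdev
build/examples/bdevperf --json /dev/fd/62 \
    -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 62<<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_xnvme_create",
   "params": {"io_mechanism": "io_uring", "conserve_cpu": false,
              "filename": "/dev/nvme0n1", "name": "xnvme_bdev"}},
  {"method": "bdev_wait_for_examine"}]}]}
JSON
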
00:12:52.886 [2024-11-28 05:02:22.085778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81316 ] 00:12:53.148 [2024-11-28 05:02:22.235729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.148 [2024-11-28 05:02:22.268271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.148 Running I/O for 5 seconds... 00:12:55.481 33836.00 IOPS, 132.17 MiB/s [2024-11-28T05:02:25.707Z] 33588.00 IOPS, 131.20 MiB/s [2024-11-28T05:02:26.649Z] 33261.00 IOPS, 129.93 MiB/s [2024-11-28T05:02:27.591Z] 33505.75 IOPS, 130.88 MiB/s [2024-11-28T05:02:27.591Z] 33574.60 IOPS, 131.15 MiB/s 00:12:58.307 Latency(us) 00:12:58.307 [2024-11-28T05:02:27.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.307 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:58.307 xnvme_bdev : 5.01 33539.06 131.01 0.00 0.00 1904.51 671.11 5041.23 00:12:58.307 [2024-11-28T05:02:27.591Z] =================================================================================================================== 00:12:58.307 [2024-11-28T05:02:27.591Z] Total : 33539.06 131.01 0.00 0.00 1904.51 671.11 5041.23 00:12:58.307 00:12:58.307 real 0m11.173s 00:12:58.307 user 0m4.759s 00:12:58.307 sys 0m6.179s 00:12:58.307 ************************************ 00:12:58.307 END TEST xnvme_bdevperf 00:12:58.307 ************************************ 00:12:58.307 05:02:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.307 05:02:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:58.569 05:02:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:58.569 05:02:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:58.569 05:02:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.569 05:02:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.569 ************************************ 00:12:58.569 START TEST xnvme_fio_plugin 00:12:58.569 ************************************ 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:58.569 05:02:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:58.569 { 00:12:58.569 "subsystems": [ 00:12:58.569 { 00:12:58.569 "subsystem": "bdev", 00:12:58.569 "config": [ 00:12:58.569 { 00:12:58.569 "params": { 00:12:58.569 "io_mechanism": "io_uring", 00:12:58.569 "conserve_cpu": false, 00:12:58.569 "filename": "/dev/nvme0n1", 00:12:58.569 "name": "xnvme_bdev" 00:12:58.569 }, 00:12:58.569 "method": "bdev_xnvme_create" 00:12:58.569 }, 00:12:58.569 { 00:12:58.570 "method": "bdev_wait_for_examine" 00:12:58.570 } 00:12:58.570 ] 00:12:58.570 } 00:12:58.570 ] 00:12:58.570 } 00:12:58.570 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:58.570 fio-3.35 00:12:58.570 Starting 1 thread 00:13:05.163 00:13:05.163 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81424: Thu Nov 28 05:02:33 2024 00:13:05.163 read: IOPS=31.9k, BW=125MiB/s (131MB/s)(623MiB/5001msec) 00:13:05.163 slat (nsec): min=2872, max=54263, avg=3168.41, stdev=1242.67 00:13:05.163 clat (usec): min=1018, max=4617, avg=1880.30, stdev=318.12 00:13:05.163 lat (usec): min=1021, max=4639, avg=1883.47, stdev=318.22 00:13:05.163 clat percentiles (usec): 00:13:05.163 | 1.00th=[ 1205], 5.00th=[ 1369], 10.00th=[ 1467], 20.00th=[ 1614], 00:13:05.163 | 30.00th=[ 1713], 40.00th=[ 1795], 50.00th=[ 1876], 60.00th=[ 1958], 00:13:05.163 | 70.00th=[ 2040], 80.00th=[ 2114], 90.00th=[ 2278], 95.00th=[ 2376], 00:13:05.163 | 99.00th=[ 2737], 99.50th=[ 2868], 99.90th=[ 3294], 99.95th=[ 3556], 00:13:05.163 | 99.99th=[ 4555] 00:13:05.164 bw 
( KiB/s): min=125440, max=132096, per=100.00%, avg=128225.78, stdev=1901.51, samples=9 00:13:05.164 iops : min=31360, max=33024, avg=32056.44, stdev=475.38, samples=9 00:13:05.164 lat (msec) : 2=65.49%, 4=34.46%, 10=0.04% 00:13:05.164 cpu : usr=32.18%, sys=66.90%, ctx=9, majf=0, minf=1063 00:13:05.164 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:05.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.164 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:05.164 issued rwts: total=159614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.164 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:05.164 00:13:05.164 Run status group 0 (all jobs): 00:13:05.164 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=623MiB (654MB), run=5001-5001msec 00:13:05.164 ----------------------------------------------------- 00:13:05.164 Suppressions used: 00:13:05.164 count bytes template 00:13:05.164 1 11 /usr/src/fio/parse.c 00:13:05.164 1 8 libtcmalloc_minimal.so 00:13:05.164 1 904 libcrypto.so 00:13:05.164 ----------------------------------------------------- 00:13:05.164 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:05.164 05:02:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:05.164 { 00:13:05.164 "subsystems": [ 00:13:05.164 { 00:13:05.164 "subsystem": "bdev", 00:13:05.164 "config": [ 00:13:05.164 { 00:13:05.164 "params": { 00:13:05.164 "io_mechanism": "io_uring", 00:13:05.164 "conserve_cpu": false, 00:13:05.164 "filename": "/dev/nvme0n1", 00:13:05.164 "name": "xnvme_bdev" 00:13:05.164 }, 00:13:05.164 "method": "bdev_xnvme_create" 00:13:05.164 }, 00:13:05.164 { 00:13:05.164 "method": "bdev_wait_for_examine" 00:13:05.164 } 00:13:05.164 ] 00:13:05.164 } 00:13:05.164 ] 00:13:05.164 } 00:13:05.164 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:05.164 fio-3.35 00:13:05.164 Starting 1 thread 00:13:10.453 00:13:10.453 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81499: Thu Nov 28 05:02:39 2024 00:13:10.453 write: IOPS=33.2k, BW=130MiB/s (136MB/s)(649MiB/5002msec); 0 zone resets 00:13:10.453 slat (usec): min=2, max=501, avg= 3.41, stdev= 2.91 00:13:10.453 clat (usec): min=579, max=5554, avg=1793.25, stdev=337.77 00:13:10.453 lat (usec): min=588, max=5558, avg=1796.66, stdev=338.04 00:13:10.453 clat percentiles (usec): 00:13:10.453 | 1.00th=[ 1221], 5.00th=[ 1336], 10.00th=[ 1401], 20.00th=[ 1516], 00:13:10.453 | 30.00th=[ 1598], 40.00th=[ 1680], 50.00th=[ 1745], 60.00th=[ 1844], 00:13:10.453 | 70.00th=[ 1926], 80.00th=[ 2040], 90.00th=[ 2212], 95.00th=[ 2376], 00:13:10.453 | 99.00th=[ 2802], 99.50th=[ 2999], 99.90th=[ 3621], 99.95th=[ 3884], 00:13:10.453 | 99.99th=[ 5473] 00:13:10.453 bw ( KiB/s): min=126560, max=138528, per=100.00%, avg=132910.67, stdev=4118.12, samples=9 00:13:10.453 iops : min=31640, max=34632, avg=33227.67, stdev=1029.53, samples=9 00:13:10.453 lat (usec) : 750=0.01%, 1000=0.06% 00:13:10.453 lat (msec) : 2=76.48%, 4=23.41%, 10=0.04% 00:13:10.453 cpu : usr=32.73%, sys=65.61%, ctx=26, majf=0, minf=1064 00:13:10.453 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.3%, >=64=1.6% 00:13:10.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.453 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:10.453 issued rwts: total=0,166134,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.453 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:10.453 00:13:10.453 Run status group 0 (all jobs): 00:13:10.453 WRITE: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=649MiB (680MB), run=5002-5002msec 00:13:10.453 ----------------------------------------------------- 00:13:10.453 Suppressions used: 00:13:10.453 count bytes template 00:13:10.453 1 11 /usr/src/fio/parse.c 00:13:10.453 1 8 libtcmalloc_minimal.so 00:13:10.453 1 904 libcrypto.so 00:13:10.453 ----------------------------------------------------- 00:13:10.453 00:13:10.453 00:13:10.453 real 0m12.032s 00:13:10.453 user 0m4.435s 00:13:10.453 sys 0m7.158s 00:13:10.453 05:02:39 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.453 ************************************ 00:13:10.453 END TEST xnvme_fio_plugin 00:13:10.453 ************************************ 00:13:10.453 05:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:10.453 05:02:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:10.453 05:02:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:10.453 05:02:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:10.453 05:02:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:10.453 05:02:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:10.453 05:02:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.453 05:02:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.715 ************************************ 00:13:10.715 START TEST xnvme_rpc 00:13:10.715 ************************************ 00:13:10.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.715 05:02:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:10.715 05:02:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:10.715 05:02:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:10.715 05:02:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:10.715 05:02:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:10.715 05:02:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81584 00:13:10.715 05:02:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81584 00:13:10.715 05:02:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81584 ']' 00:13:10.715 05:02:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:10.715 05:02:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.715 05:02:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.715 05:02:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.715 05:02:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.715 05:02:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.715 [2024-11-28 05:02:39.824473] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
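
From this point the conserve_cpu axis flips to true for the same io_uring mechanism. The whole suite is a two-level matrix driven by the xnvme.sh loops at @75 and @82 in the trace; a reconstruction follows, with the array literals inferred from the values this log actually exercises (libaio, io_uring, io_uring_cmd crossed with false, true) rather than quoted from the script:

xnvme_io=(libaio io_uring io_uring_cmd)   # inferred from the mechanisms seen in this log
xnvme_conserve_cpu=(false true)
for io in "${xnvme_io[@]}"; do
  for cc in "${xnvme_conserve_cpu[@]}"; do
    run_test xnvme_rpc        # RPC create/inspect/delete
    run_test xnvme_bdevperf   # randread + randwrite, qd64 / 4 KiB
    run_test xnvme_fio_plugin # same patterns through the fio external engine
  done
done
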
00:13:10.715 [2024-11-28 05:02:39.824621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81584 ] 00:13:10.715 [2024-11-28 05:02:39.972928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.977 [2024-11-28 05:02:40.004707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.549 xnvme_bdev 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.549 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81584 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81584 ']' 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81584 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.550 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81584 00:13:11.812 killing process with pid 81584 00:13:11.812 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.812 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.812 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81584' 00:13:11.812 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81584 00:13:11.812 05:02:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81584 00:13:12.073 00:13:12.073 real 0m1.398s 00:13:12.073 user 0m1.502s 00:13:12.073 sys 0m0.378s 00:13:12.073 05:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.073 05:02:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.073 ************************************ 00:13:12.073 END TEST xnvme_rpc 00:13:12.073 ************************************ 00:13:12.073 05:02:41 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:12.073 05:02:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:12.073 05:02:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.073 05:02:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:12.073 ************************************ 00:13:12.073 START TEST xnvme_bdevperf 00:13:12.073 ************************************ 00:13:12.074 05:02:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:12.074 05:02:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:12.074 05:02:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:12.074 05:02:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:12.074 05:02:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:12.074 05:02:41 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:12.074 05:02:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:12.074 05:02:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:12.074 { 00:13:12.074 "subsystems": [ 00:13:12.074 { 00:13:12.074 "subsystem": "bdev", 00:13:12.074 "config": [ 00:13:12.074 { 00:13:12.074 "params": { 00:13:12.074 "io_mechanism": "io_uring", 00:13:12.074 "conserve_cpu": true, 00:13:12.074 "filename": "/dev/nvme0n1", 00:13:12.074 "name": "xnvme_bdev" 00:13:12.074 }, 00:13:12.074 "method": "bdev_xnvme_create" 00:13:12.074 }, 00:13:12.074 { 00:13:12.074 "method": "bdev_wait_for_examine" 00:13:12.074 } 00:13:12.074 ] 00:13:12.074 } 00:13:12.074 ] 00:13:12.074 } 00:13:12.074 [2024-11-28 05:02:41.272974] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:13:12.074 [2024-11-28 05:02:41.273296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81637 ] 00:13:12.348 [2024-11-28 05:02:41.423158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.348 [2024-11-28 05:02:41.453576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.348 Running I/O for 5 seconds... 00:13:14.300 30590.00 IOPS, 119.49 MiB/s [2024-11-28T05:02:44.970Z] 30719.00 IOPS, 120.00 MiB/s [2024-11-28T05:02:45.917Z] 30506.00 IOPS, 119.16 MiB/s [2024-11-28T05:02:46.862Z] 30735.50 IOPS, 120.06 MiB/s 00:13:17.578 Latency(us) 00:13:17.578 [2024-11-28T05:02:46.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.578 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:17.579 xnvme_bdev : 5.00 30708.20 119.95 0.00 0.00 2080.45 1064.96 9578.34 00:13:17.579 [2024-11-28T05:02:46.863Z] =================================================================================================================== 00:13:17.579 [2024-11-28T05:02:46.863Z] Total : 30708.20 119.95 0.00 0.00 2080.45 1064.96 9578.34 00:13:17.579 05:02:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:17.579 05:02:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:17.579 05:02:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:17.579 05:02:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:17.579 05:02:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:17.579 { 00:13:17.579 "subsystems": [ 00:13:17.579 { 00:13:17.579 "subsystem": "bdev", 00:13:17.579 "config": [ 00:13:17.579 { 00:13:17.579 "params": { 00:13:17.579 "io_mechanism": "io_uring", 00:13:17.579 "conserve_cpu": true, 00:13:17.579 "filename": "/dev/nvme0n1", 00:13:17.579 "name": "xnvme_bdev" 00:13:17.579 }, 00:13:17.579 "method": "bdev_xnvme_create" 00:13:17.579 }, 00:13:17.579 { 00:13:17.579 "method": "bdev_wait_for_examine" 00:13:17.579 } 00:13:17.579 ] 00:13:17.579 } 00:13:17.579 ] 00:13:17.579 } 00:13:17.579 [2024-11-28 05:02:46.803557] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
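
The MiB/s column in these bdevperf tables is derived, not independently measured: IOPS times the 4096-byte I/O size. For the randread Total row a few lines up, 30708.20 x 4096 / 2^20 = 119.95 MiB/s, which a one-liner confirms:

awk 'BEGIN { printf "%.2f MiB/s\n", 30708.20 * 4096 / 1048576 }'   # prints 119.95 MiB/s
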
00:13:17.579 [2024-11-28 05:02:46.803697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81708 ] 00:13:17.841 [2024-11-28 05:02:46.949517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.841 [2024-11-28 05:02:46.979663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.841 Running I/O for 5 seconds... 00:13:19.805 33463.00 IOPS, 130.71 MiB/s [2024-11-28T05:02:50.475Z] 34251.00 IOPS, 133.79 MiB/s [2024-11-28T05:02:51.421Z] 33646.33 IOPS, 131.43 MiB/s [2024-11-28T05:02:52.365Z] 33582.25 IOPS, 131.18 MiB/s [2024-11-28T05:02:52.365Z] 33795.60 IOPS, 132.01 MiB/s 00:13:23.081 Latency(us) 00:13:23.081 [2024-11-28T05:02:52.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.081 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:23.081 xnvme_bdev : 5.01 33761.89 131.88 0.00 0.00 1891.75 938.93 7158.55 00:13:23.081 [2024-11-28T05:02:52.365Z] =================================================================================================================== 00:13:23.081 [2024-11-28T05:02:52.365Z] Total : 33761.89 131.88 0.00 0.00 1891.75 938.93 7158.55 00:13:23.081 ************************************ 00:13:23.081 END TEST xnvme_bdevperf 00:13:23.081 ************************************ 00:13:23.081 00:13:23.081 real 0m11.060s 00:13:23.081 user 0m8.291s 00:13:23.081 sys 0m2.307s 00:13:23.081 05:02:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.081 05:02:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:23.081 05:02:52 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:23.081 05:02:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:23.081 05:02:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.081 05:02:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.081 ************************************ 00:13:23.081 START TEST xnvme_fio_plugin 00:13:23.081 ************************************ 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:23.081 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:23.343 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:23.343 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:23.343 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:23.343 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:23.343 05:02:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:23.343 { 00:13:23.343 "subsystems": [ 00:13:23.343 { 00:13:23.343 "subsystem": "bdev", 00:13:23.343 "config": [ 00:13:23.343 { 00:13:23.343 "params": { 00:13:23.343 "io_mechanism": "io_uring", 00:13:23.343 "conserve_cpu": true, 00:13:23.343 "filename": "/dev/nvme0n1", 00:13:23.343 "name": "xnvme_bdev" 00:13:23.343 }, 00:13:23.343 "method": "bdev_xnvme_create" 00:13:23.343 }, 00:13:23.343 { 00:13:23.343 "method": "bdev_wait_for_examine" 00:13:23.343 } 00:13:23.343 ] 00:13:23.343 } 00:13:23.343 ] 00:13:23.343 } 00:13:23.343 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:23.343 fio-3.35 00:13:23.343 Starting 1 thread 00:13:29.938 00:13:29.938 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81815: Thu Nov 28 05:02:57 2024 00:13:29.938 read: IOPS=31.4k, BW=123MiB/s (129MB/s)(613MiB/5001msec) 00:13:29.938 slat (nsec): min=2903, max=92831, avg=3412.57, stdev=1702.08 00:13:29.938 clat (usec): min=1079, max=7524, avg=1903.14, stdev=302.40 00:13:29.938 lat (usec): min=1082, max=7531, avg=1906.55, stdev=302.73 00:13:29.938 clat percentiles (usec): 00:13:29.938 | 1.00th=[ 1352], 5.00th=[ 1483], 10.00th=[ 1565], 20.00th=[ 1663], 00:13:29.938 | 30.00th=[ 1745], 40.00th=[ 1811], 50.00th=[ 1893], 60.00th=[ 1958], 00:13:29.938 | 70.00th=[ 2024], 80.00th=[ 2114], 90.00th=[ 2245], 95.00th=[ 2376], 00:13:29.938 | 99.00th=[ 2704], 99.50th=[ 2835], 99.90th=[ 3359], 99.95th=[ 4490], 00:13:29.938 | 99.99th=[ 7504] 00:13:29.938 bw 
( KiB/s): min=122368, max=131072, per=100.00%, avg=125724.44, stdev=2953.57, samples=9 00:13:29.938 iops : min=30592, max=32768, avg=31431.11, stdev=738.39, samples=9 00:13:29.938 lat (msec) : 2=66.61%, 4=33.31%, 10=0.08% 00:13:29.938 cpu : usr=72.90%, sys=23.82%, ctx=9, majf=0, minf=1063 00:13:29.938 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:29.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.938 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:29.938 issued rwts: total=156928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.938 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:29.938 00:13:29.938 Run status group 0 (all jobs): 00:13:29.938 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=613MiB (643MB), run=5001-5001msec 00:13:29.938 ----------------------------------------------------- 00:13:29.938 Suppressions used: 00:13:29.938 count bytes template 00:13:29.938 1 11 /usr/src/fio/parse.c 00:13:29.938 1 8 libtcmalloc_minimal.so 00:13:29.938 1 904 libcrypto.so 00:13:29.938 ----------------------------------------------------- 00:13:29.938 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:29.938 05:02:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:29.938 { 00:13:29.938 "subsystems": [ 00:13:29.938 { 00:13:29.938 "subsystem": "bdev", 00:13:29.938 "config": [ 00:13:29.938 { 00:13:29.938 "params": { 00:13:29.938 "io_mechanism": "io_uring", 00:13:29.938 "conserve_cpu": true, 00:13:29.938 "filename": "/dev/nvme0n1", 00:13:29.938 "name": "xnvme_bdev" 00:13:29.938 }, 00:13:29.938 "method": "bdev_xnvme_create" 00:13:29.938 }, 00:13:29.938 { 00:13:29.939 "method": "bdev_wait_for_examine" 00:13:29.939 } 00:13:29.939 ] 00:13:29.939 } 00:13:29.939 ] 00:13:29.939 } 00:13:29.939 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:29.939 fio-3.35 00:13:29.939 Starting 1 thread 00:13:35.234 00:13:35.234 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81891: Thu Nov 28 05:03:03 2024 00:13:35.234 write: IOPS=32.6k, BW=127MiB/s (134MB/s)(637MiB/5002msec); 0 zone resets 00:13:35.234 slat (usec): min=2, max=147, avg= 3.58, stdev= 1.64 00:13:35.234 clat (usec): min=1020, max=4870, avg=1821.12, stdev=278.23 00:13:35.234 lat (usec): min=1023, max=4876, avg=1824.69, stdev=278.46 00:13:35.234 clat percentiles (usec): 00:13:35.234 | 1.00th=[ 1303], 5.00th=[ 1418], 10.00th=[ 1483], 20.00th=[ 1582], 00:13:35.234 | 30.00th=[ 1663], 40.00th=[ 1729], 50.00th=[ 1795], 60.00th=[ 1876], 00:13:35.234 | 70.00th=[ 1942], 80.00th=[ 2040], 90.00th=[ 2180], 95.00th=[ 2311], 00:13:35.234 | 99.00th=[ 2638], 99.50th=[ 2802], 99.90th=[ 3097], 99.95th=[ 3130], 00:13:35.234 | 99.99th=[ 3556] 00:13:35.234 bw ( KiB/s): min=123384, max=134600, per=100.00%, avg=130669.33, stdev=3582.03, samples=9 00:13:35.234 iops : min=30846, max=33650, avg=32667.33, stdev=895.51, samples=9 00:13:35.234 lat (msec) : 2=75.90%, 4=24.10%, 10=0.01% 00:13:35.234 cpu : usr=74.37%, sys=22.46%, ctx=12, majf=0, minf=1064 00:13:35.234 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:35.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.234 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:35.234 issued rwts: total=0,163063,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:35.234 00:13:35.234 Run status group 0 (all jobs): 00:13:35.234 WRITE: bw=127MiB/s (134MB/s), 127MiB/s-127MiB/s (134MB/s-134MB/s), io=637MiB (668MB), run=5002-5002msec 00:13:35.234 ----------------------------------------------------- 00:13:35.234 Suppressions used: 00:13:35.234 count bytes template 00:13:35.234 1 11 /usr/src/fio/parse.c 00:13:35.234 1 8 libtcmalloc_minimal.so 00:13:35.234 1 904 libcrypto.so 00:13:35.234 ----------------------------------------------------- 00:13:35.234 00:13:35.234 00:13:35.234 real 0m12.044s 00:13:35.234 user 0m8.462s 00:13:35.234 sys 0m2.947s 00:13:35.234 ************************************ 00:13:35.234 END TEST xnvme_fio_plugin 00:13:35.234 
************************************ 00:13:35.234 05:03:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.234 05:03:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:35.234 05:03:04 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:35.234 05:03:04 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:13:35.234 05:03:04 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:13:35.234 05:03:04 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:13:35.234 05:03:04 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:35.234 05:03:04 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:35.234 05:03:04 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:35.234 05:03:04 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:35.234 05:03:04 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:35.234 05:03:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:35.234 05:03:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.234 05:03:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.234 ************************************ 00:13:35.234 START TEST xnvme_rpc 00:13:35.234 ************************************ 00:13:35.234 05:03:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:35.234 05:03:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:35.234 05:03:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:35.234 05:03:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:35.234 05:03:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:35.234 05:03:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81972 00:13:35.234 05:03:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81972 00:13:35.234 05:03:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81972 ']' 00:13:35.234 05:03:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.234 05:03:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.234 05:03:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.234 05:03:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:35.234 05:03:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.234 05:03:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.496 [2024-11-28 05:03:04.545774] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
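The xnvme_rpc test starting here boils down to: launch spdk_tgt, create an xnvme bdev over JSON-RPC, read the config back and assert each creation parameter, delete the bdev, kill the target. A minimal standalone sketch of that flow, assuming SPDK's stock scripts/rpc.py client and repo-relative paths (the readiness poll is a stand-in for the harness's waitforlisten, not its literal implementation):

    ./build/bin/spdk_tgt &
    tgt_pid=$!
    # poll until the RPC socket answers (stand-in for waitforlisten)
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done
    ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
    # same jq filter the test uses to verify each creation parameter
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill "$tgt_pid"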
00:13:35.496 [2024-11-28 05:03:04.545925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81972 ] 00:13:35.496 [2024-11-28 05:03:04.685261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.496 [2024-11-28 05:03:04.714001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.438 xnvme_bdev 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81972 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81972 ']' 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81972 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81972 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:36.438 killing process with pid 81972 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81972' 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81972 00:13:36.438 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81972 00:13:36.700 00:13:36.700 real 0m1.409s 00:13:36.700 user 0m1.509s 00:13:36.700 sys 0m0.397s 00:13:36.700 ************************************ 00:13:36.700 END TEST xnvme_rpc 00:13:36.700 ************************************ 00:13:36.700 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.700 05:03:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.700 05:03:05 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:36.700 05:03:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:36.700 05:03:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.700 05:03:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:36.700 ************************************ 00:13:36.700 START TEST xnvme_bdevperf 00:13:36.700 ************************************ 00:13:36.700 05:03:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:36.700 05:03:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:36.700 05:03:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:13:36.700 05:03:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:36.700 05:03:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:36.700 05:03:05 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:36.700 05:03:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:36.701 05:03:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:36.701 { 00:13:36.701 "subsystems": [ 00:13:36.701 { 00:13:36.701 "subsystem": "bdev", 00:13:36.701 "config": [ 00:13:36.701 { 00:13:36.701 "params": { 00:13:36.701 "io_mechanism": "io_uring_cmd", 00:13:36.701 "conserve_cpu": false, 00:13:36.701 "filename": "/dev/ng0n1", 00:13:36.701 "name": "xnvme_bdev" 00:13:36.701 }, 00:13:36.701 "method": "bdev_xnvme_create" 00:13:36.701 }, 00:13:36.701 { 00:13:36.701 "method": "bdev_wait_for_examine" 00:13:36.701 } 00:13:36.701 ] 00:13:36.701 } 00:13:36.701 ] 00:13:36.701 } 00:13:36.961 [2024-11-28 05:03:06.004524] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:13:36.961 [2024-11-28 05:03:06.004660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82024 ] 00:13:36.961 [2024-11-28 05:03:06.147896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.961 [2024-11-28 05:03:06.176468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.223 Running I/O for 5 seconds... 00:13:39.112 31680.00 IOPS, 123.75 MiB/s [2024-11-28T05:03:09.341Z] 31520.00 IOPS, 123.12 MiB/s [2024-11-28T05:03:10.285Z] 31509.33 IOPS, 123.08 MiB/s [2024-11-28T05:03:11.672Z] 32432.00 IOPS, 126.69 MiB/s 00:13:42.388 Latency(us) 00:13:42.388 [2024-11-28T05:03:11.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.388 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:42.388 xnvme_bdev : 5.00 33046.09 129.09 0.00 0.00 1933.32 1077.56 6099.89 00:13:42.388 [2024-11-28T05:03:11.672Z] =================================================================================================================== 00:13:42.388 [2024-11-28T05:03:11.672Z] Total : 33046.09 129.09 0.00 0.00 1933.32 1077.56 6099.89 00:13:42.388 05:03:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:42.388 05:03:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:42.388 05:03:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:42.388 05:03:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:42.388 05:03:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:42.388 { 00:13:42.388 "subsystems": [ 00:13:42.388 { 00:13:42.388 "subsystem": "bdev", 00:13:42.388 "config": [ 00:13:42.388 { 00:13:42.388 "params": { 00:13:42.388 "io_mechanism": "io_uring_cmd", 00:13:42.388 "conserve_cpu": false, 00:13:42.388 "filename": "/dev/ng0n1", 00:13:42.388 "name": "xnvme_bdev" 00:13:42.388 }, 00:13:42.388 "method": "bdev_xnvme_create" 00:13:42.388 }, 00:13:42.388 { 00:13:42.388 "method": "bdev_wait_for_examine" 00:13:42.388 } 00:13:42.388 ] 00:13:42.388 } 00:13:42.388 ] 00:13:42.388 } 00:13:42.388 [2024-11-28 05:03:11.517471] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
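Each bdevperf invocation in this block receives its bdev layout as the JSON blob printed above, streamed over --json /dev/fd/62. A hand-run equivalent using process substitution (a sketch; the harness's gen_conf plumbing differs):

    ./build/examples/bdevperf -q 64 -o 4096 -w randwrite -t 5 -T xnvme_bdev \
        --json <(echo '{"subsystems": [{"subsystem": "bdev", "config": [
            {"method": "bdev_xnvme_create",
             "params": {"io_mechanism": "io_uring_cmd", "conserve_cpu": false,
                        "filename": "/dev/ng0n1", "name": "xnvme_bdev"}},
            {"method": "bdev_wait_for_examine"}]}]}')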
00:13:42.388 [2024-11-28 05:03:11.517603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82093 ] 00:13:42.388 [2024-11-28 05:03:11.665054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.650 [2024-11-28 05:03:11.694148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.650 Running I/O for 5 seconds... 00:13:44.538 33779.00 IOPS, 131.95 MiB/s [2024-11-28T05:03:15.227Z] 33945.00 IOPS, 132.60 MiB/s [2024-11-28T05:03:15.806Z] 33911.00 IOPS, 132.46 MiB/s [2024-11-28T05:03:16.791Z] 34408.00 IOPS, 134.41 MiB/s 00:13:47.507 Latency(us) 00:13:47.507 [2024-11-28T05:03:16.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.507 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:47.507 xnvme_bdev : 5.00 34698.44 135.54 0.00 0.00 1841.02 875.91 4007.78 00:13:47.507 [2024-11-28T05:03:16.791Z] =================================================================================================================== 00:13:47.507 [2024-11-28T05:03:16.791Z] Total : 34698.44 135.54 0.00 0.00 1841.02 875.91 4007.78 00:13:47.767 05:03:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:47.767 05:03:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:13:47.767 05:03:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:47.767 05:03:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:47.767 05:03:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:47.767 { 00:13:47.767 "subsystems": [ 00:13:47.767 { 00:13:47.767 "subsystem": "bdev", 00:13:47.767 "config": [ 00:13:47.767 { 00:13:47.767 "params": { 00:13:47.767 "io_mechanism": "io_uring_cmd", 00:13:47.767 "conserve_cpu": false, 00:13:47.767 "filename": "/dev/ng0n1", 00:13:47.767 "name": "xnvme_bdev" 00:13:47.767 }, 00:13:47.767 "method": "bdev_xnvme_create" 00:13:47.767 }, 00:13:47.767 { 00:13:47.767 "method": "bdev_wait_for_examine" 00:13:47.767 } 00:13:47.767 ] 00:13:47.767 } 00:13:47.767 ] 00:13:47.767 } 00:13:47.767 [2024-11-28 05:03:17.049436] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:13:47.767 [2024-11-28 05:03:17.049598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82157 ] 00:13:48.028 [2024-11-28 05:03:17.199541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.028 [2024-11-28 05:03:17.227991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.289 Running I/O for 5 seconds... 
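The unmap pass that just started will report far higher IOPS than the 4 KiB read/write passes below (~80k vs ~33-35k): unmap and write_zeroes are dataless commands, so no 4 KiB payload moves per I/O. Only the -w argument changes between the four bdevperf passes; the loop is essentially the following sketch (paths and $conf assumed):

    for w in randread randwrite unmap write_zeroes; do
        ./build/examples/bdevperf --json "$conf" -q 64 -o 4096 -w "$w" \
            -t 5 -T xnvme_bdev
    done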
00:13:50.172 73408.00 IOPS, 286.75 MiB/s [2024-11-28T05:03:20.398Z] 75744.00 IOPS, 295.88 MiB/s [2024-11-28T05:03:21.339Z] 77738.67 IOPS, 303.67 MiB/s [2024-11-28T05:03:22.713Z] 76272.00 IOPS, 297.94 MiB/s 00:13:53.429 Latency(us) 00:13:53.429 [2024-11-28T05:03:22.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.429 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:13:53.429 xnvme_bdev : 5.00 79935.27 312.25 0.00 0.00 797.12 482.07 3428.04 00:13:53.429 [2024-11-28T05:03:22.713Z] =================================================================================================================== 00:13:53.429 [2024-11-28T05:03:22.713Z] Total : 79935.27 312.25 0.00 0.00 797.12 482.07 3428.04 00:13:53.429 05:03:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:53.429 05:03:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:53.429 05:03:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:13:53.429 05:03:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:53.429 05:03:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:53.429 { 00:13:53.429 "subsystems": [ 00:13:53.429 { 00:13:53.429 "subsystem": "bdev", 00:13:53.429 "config": [ 00:13:53.429 { 00:13:53.429 "params": { 00:13:53.429 "io_mechanism": "io_uring_cmd", 00:13:53.429 "conserve_cpu": false, 00:13:53.429 "filename": "/dev/ng0n1", 00:13:53.429 "name": "xnvme_bdev" 00:13:53.429 }, 00:13:53.429 "method": "bdev_xnvme_create" 00:13:53.429 }, 00:13:53.429 { 00:13:53.429 "method": "bdev_wait_for_examine" 00:13:53.429 } 00:13:53.429 ] 00:13:53.429 } 00:13:53.429 ] 00:13:53.429 } 00:13:53.429 [2024-11-28 05:03:22.546020] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:13:53.429 [2024-11-28 05:03:22.546136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82223 ] 00:13:53.429 [2024-11-28 05:03:22.692791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.689 [2024-11-28 05:03:22.720333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.689 Running I/O for 5 seconds... 
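For the write_zeroes pass, bdevperf exercises the bdev's zero-write path (offloaded to the device where supported, emulated with zeroed writes otherwise), which is why its latency profile below looks nothing like randwrite's — note the 21 ms max. Whether the drive advertises the command natively can be checked out-of-band; a sketch assuming nvme-cli is installed and /dev/nvme0 is the controller node (ONCS bit 3 is Write Zeroes):

    nvme id-ctrl /dev/nvme0 | grep -i oncs   # bit 3 set => native Write Zeroes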
00:13:55.559 46117.00 IOPS, 180.14 MiB/s [2024-11-28T05:03:26.222Z] 43462.00 IOPS, 169.77 MiB/s [2024-11-28T05:03:27.163Z] 40912.33 IOPS, 159.81 MiB/s [2024-11-28T05:03:28.102Z] 39380.75 IOPS, 153.83 MiB/s [2024-11-28T05:03:28.102Z] 38596.80 IOPS, 150.77 MiB/s 00:13:58.818 Latency(us) 00:13:58.818 [2024-11-28T05:03:28.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.818 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:13:58.818 xnvme_bdev : 5.00 38577.87 150.69 0.00 0.00 1654.69 222.13 21273.99 00:13:58.818 [2024-11-28T05:03:28.102Z] =================================================================================================================== 00:13:58.818 [2024-11-28T05:03:28.102Z] Total : 38577.87 150.69 0.00 0.00 1654.69 222.13 21273.99 00:13:58.818 00:13:58.818 real 0m22.126s 00:13:58.818 user 0m11.147s 00:13:58.818 sys 0m10.499s 00:13:58.818 05:03:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.818 05:03:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:58.818 ************************************ 00:13:58.818 END TEST xnvme_bdevperf 00:13:58.818 ************************************ 00:13:59.079 05:03:28 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:59.079 05:03:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:59.079 05:03:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.079 05:03:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.079 ************************************ 00:13:59.079 START TEST xnvme_fio_plugin 00:13:59.079 ************************************ 00:13:59.079 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:59.079 05:03:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:59.079 05:03:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:13:59.079 05:03:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:59.079 05:03:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:59.079 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:59.079 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:59.079 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:59.079 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:59.079 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:59.079 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:59.079 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:59.080 05:03:28 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:59.080 05:03:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:59.080 05:03:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:59.080 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:59.080 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:59.080 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:59.080 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:59.080 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:59.080 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:59.080 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:59.080 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:59.080 05:03:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:59.080 { 00:13:59.080 "subsystems": [ 00:13:59.080 { 00:13:59.080 "subsystem": "bdev", 00:13:59.080 "config": [ 00:13:59.080 { 00:13:59.080 "params": { 00:13:59.080 "io_mechanism": "io_uring_cmd", 00:13:59.080 "conserve_cpu": false, 00:13:59.080 "filename": "/dev/ng0n1", 00:13:59.080 "name": "xnvme_bdev" 00:13:59.080 }, 00:13:59.080 "method": "bdev_xnvme_create" 00:13:59.080 }, 00:13:59.080 { 00:13:59.080 "method": "bdev_wait_for_examine" 00:13:59.080 } 00:13:59.080 ] 00:13:59.080 } 00:13:59.080 ] 00:13:59.080 } 00:13:59.080 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:59.080 fio-3.35 00:13:59.080 Starting 1 thread 00:14:05.668 00:14:05.668 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82326: Thu Nov 28 05:03:33 2024 00:14:05.668 read: IOPS=36.6k, BW=143MiB/s (150MB/s)(715MiB/5002msec) 00:14:05.668 slat (usec): min=2, max=169, avg= 3.36, stdev= 1.65 00:14:05.668 clat (usec): min=831, max=6453, avg=1614.89, stdev=346.44 00:14:05.668 lat (usec): min=834, max=6456, avg=1618.25, stdev=346.67 00:14:05.668 clat percentiles (usec): 00:14:05.668 | 1.00th=[ 1020], 5.00th=[ 1139], 10.00th=[ 1205], 20.00th=[ 1303], 00:14:05.668 | 30.00th=[ 1401], 40.00th=[ 1500], 50.00th=[ 1582], 60.00th=[ 1680], 00:14:05.668 | 70.00th=[ 1778], 80.00th=[ 1893], 90.00th=[ 2057], 95.00th=[ 2212], 00:14:05.668 | 99.00th=[ 2606], 99.50th=[ 2769], 99.90th=[ 3130], 99.95th=[ 3523], 00:14:05.668 | 99.99th=[ 3982] 00:14:05.668 bw ( KiB/s): min=137216, max=157184, per=99.66%, avg=145918.22, stdev=7845.55, samples=9 00:14:05.668 iops : min=34304, max=39296, avg=36479.56, stdev=1961.39, samples=9 00:14:05.668 lat (usec) : 1000=0.67% 00:14:05.668 lat (msec) : 2=86.47%, 4=12.86%, 10=0.01% 00:14:05.668 cpu : usr=39.67%, sys=59.21%, ctx=12, majf=0, minf=1063 00:14:05.668 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:05.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.668 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=1.5%, >=64=0.0% 00:14:05.668 issued rwts: total=183102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.668 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:05.668 00:14:05.668 Run status group 0 (all jobs): 00:14:05.668 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=715MiB (750MB), run=5002-5002msec 00:14:05.668 ----------------------------------------------------- 00:14:05.668 Suppressions used: 00:14:05.668 count bytes template 00:14:05.668 1 11 /usr/src/fio/parse.c 00:14:05.668 1 8 libtcmalloc_minimal.so 00:14:05.668 1 904 libcrypto.so 00:14:05.668 ----------------------------------------------------- 00:14:05.668 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:05.668 05:03:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:05.668 { 00:14:05.668 "subsystems": [ 00:14:05.668 { 00:14:05.668 "subsystem": "bdev", 00:14:05.668 "config": [ 00:14:05.668 { 00:14:05.668 "params": { 00:14:05.668 "io_mechanism": "io_uring_cmd", 00:14:05.668 "conserve_cpu": false, 00:14:05.668 "filename": "/dev/ng0n1", 00:14:05.668 "name": "xnvme_bdev" 00:14:05.668 }, 00:14:05.668 "method": "bdev_xnvme_create" 00:14:05.668 }, 00:14:05.668 { 00:14:05.668 "method": "bdev_wait_for_examine" 00:14:05.668 } 00:14:05.668 ] 00:14:05.668 } 00:14:05.668 ] 00:14:05.668 } 00:14:05.668 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:05.668 fio-3.35 00:14:05.668 Starting 1 thread 00:14:10.959 00:14:10.959 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82411: Thu Nov 28 05:03:39 2024 00:14:10.959 write: IOPS=38.2k, BW=149MiB/s (156MB/s)(746MiB/5001msec); 0 zone resets 00:14:10.959 slat (nsec): min=2933, max=97229, avg=4225.39, stdev=2441.28 00:14:10.959 clat (usec): min=205, max=6457, avg=1520.56, stdev=404.66 00:14:10.959 lat (usec): min=208, max=6460, avg=1524.78, stdev=405.16 00:14:10.959 clat percentiles (usec): 00:14:10.959 | 1.00th=[ 668], 5.00th=[ 906], 10.00th=[ 1057], 20.00th=[ 1221], 00:14:10.959 | 30.00th=[ 1336], 40.00th=[ 1418], 50.00th=[ 1500], 60.00th=[ 1598], 00:14:10.959 | 70.00th=[ 1680], 80.00th=[ 1795], 90.00th=[ 1958], 95.00th=[ 2147], 00:14:10.959 | 99.00th=[ 2638], 99.50th=[ 2999], 99.90th=[ 4555], 99.95th=[ 4948], 00:14:10.959 | 99.99th=[ 5669] 00:14:10.959 bw ( KiB/s): min=140088, max=166992, per=98.45%, avg=150320.89, stdev=7806.59, samples=9 00:14:10.959 iops : min=35022, max=41748, avg=37580.22, stdev=1951.65, samples=9 00:14:10.959 lat (usec) : 250=0.01%, 500=0.25%, 750=1.82%, 1000=5.50% 00:14:10.959 lat (msec) : 2=83.62%, 4=8.63%, 10=0.17% 00:14:10.959 cpu : usr=37.62%, sys=61.02%, ctx=8, majf=0, minf=1064 00:14:10.959 IO depths : 1=1.2%, 2=2.4%, 4=5.0%, 8=10.3%, 16=21.4%, 32=57.7%, >=64=2.0% 00:14:10.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.959 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.4%, 64=1.4%, >=64=0.0% 00:14:10.959 issued rwts: total=0,190906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.959 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:10.959 00:14:10.959 Run status group 0 (all jobs): 00:14:10.959 WRITE: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=746MiB (782MB), run=5001-5001msec 00:14:10.959 ----------------------------------------------------- 00:14:10.959 Suppressions used: 00:14:10.959 count bytes template 00:14:10.959 1 11 /usr/src/fio/parse.c 00:14:10.959 1 8 libtcmalloc_minimal.so 00:14:10.959 1 904 libcrypto.so 00:14:10.959 ----------------------------------------------------- 00:14:10.959 00:14:10.959 00:14:10.959 real 0m12.043s 00:14:10.959 user 0m4.985s 00:14:10.959 sys 0m6.629s 00:14:10.959 05:03:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.959 ************************************ 00:14:10.959 END TEST xnvme_fio_plugin 00:14:10.959 ************************************ 00:14:10.959 05:03:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:10.959 05:03:40 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:10.959 05:03:40 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:10.959 05:03:40 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:10.959 05:03:40 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:10.959 05:03:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:10.959 05:03:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.959 05:03:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:11.220 ************************************ 00:14:11.220 START TEST xnvme_rpc 00:14:11.220 ************************************ 00:14:11.220 05:03:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:11.220 05:03:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:11.220 05:03:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:11.220 05:03:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:11.220 05:03:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:11.220 05:03:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82491 00:14:11.220 05:03:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82491 00:14:11.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.220 05:03:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82491 ']' 00:14:11.220 05:03:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.220 05:03:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.220 05:03:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.220 05:03:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.220 05:03:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.220 05:03:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:11.220 [2024-11-28 05:03:40.336896] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
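This second xnvme_rpc pass differs from the earlier io_uring_cmd pass only in conserve_cpu: per the cc mapping above, cc["true"] adds the -c flag to bdev creation. A sketch of the created-and-verified state (rpc.py path assumed; the create command and jq filter match the ones traced below):

    ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # expected: true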
00:14:11.220 [2024-11-28 05:03:40.337064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82491 ] 00:14:11.220 [2024-11-28 05:03:40.485106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.481 [2024-11-28 05:03:40.514561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.053 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.053 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:12.053 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:12.053 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.053 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.053 xnvme_bdev 00:14:12.053 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.053 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:12.053 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:12.053 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:12.053 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.053 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.053 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.053 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:12.054 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.315 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:12.315 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:12.315 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.315 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.315 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.315 05:03:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82491 00:14:12.315 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82491 ']' 00:14:12.315 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82491 00:14:12.315 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:12.315 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.315 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82491 00:14:12.315 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.315 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.315 killing process with pid 82491 00:14:12.315 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82491' 00:14:12.315 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82491 00:14:12.315 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82491 00:14:12.576 00:14:12.576 real 0m1.429s 00:14:12.576 user 0m1.507s 00:14:12.576 sys 0m0.428s 00:14:12.576 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.576 05:03:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.576 ************************************ 00:14:12.576 END TEST xnvme_rpc 00:14:12.576 ************************************ 00:14:12.576 05:03:41 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:12.576 05:03:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:12.576 05:03:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.576 05:03:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:12.576 ************************************ 00:14:12.576 START TEST xnvme_bdevperf 00:14:12.576 ************************************ 00:14:12.576 05:03:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:12.576 05:03:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:12.576 05:03:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:12.576 05:03:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:12.576 05:03:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:12.576 05:03:41 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:12.576 05:03:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:12.576 05:03:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:12.576 { 00:14:12.576 "subsystems": [ 00:14:12.576 { 00:14:12.576 "subsystem": "bdev", 00:14:12.576 "config": [ 00:14:12.576 { 00:14:12.576 "params": { 00:14:12.576 "io_mechanism": "io_uring_cmd", 00:14:12.576 "conserve_cpu": true, 00:14:12.576 "filename": "/dev/ng0n1", 00:14:12.576 "name": "xnvme_bdev" 00:14:12.576 }, 00:14:12.576 "method": "bdev_xnvme_create" 00:14:12.576 }, 00:14:12.576 { 00:14:12.576 "method": "bdev_wait_for_examine" 00:14:12.576 } 00:14:12.576 ] 00:14:12.576 } 00:14:12.576 ] 00:14:12.576 } 00:14:12.576 [2024-11-28 05:03:41.821234] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:12.576 [2024-11-28 05:03:41.821369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82547 ] 00:14:12.837 [2024-11-28 05:03:41.966917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.837 [2024-11-28 05:03:41.995759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.837 Running I/O for 5 seconds... 00:14:15.170 33408.00 IOPS, 130.50 MiB/s [2024-11-28T05:03:45.399Z] 33568.00 IOPS, 131.12 MiB/s [2024-11-28T05:03:46.343Z] 33792.00 IOPS, 132.00 MiB/s [2024-11-28T05:03:47.287Z] 33760.00 IOPS, 131.88 MiB/s 00:14:18.003 Latency(us) 00:14:18.003 [2024-11-28T05:03:47.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.003 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:18.003 xnvme_bdev : 5.00 33638.90 131.40 0.00 0.00 1898.51 938.93 4915.20 00:14:18.003 [2024-11-28T05:03:47.287Z] =================================================================================================================== 00:14:18.003 [2024-11-28T05:03:47.287Z] Total : 33638.90 131.40 0.00 0.00 1898.51 938.93 4915.20 00:14:18.003 05:03:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:18.004 05:03:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:18.265 05:03:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:18.265 05:03:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:18.265 05:03:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:18.265 { 00:14:18.265 "subsystems": [ 00:14:18.265 { 00:14:18.265 "subsystem": "bdev", 00:14:18.265 "config": [ 00:14:18.265 { 00:14:18.265 "params": { 00:14:18.265 "io_mechanism": "io_uring_cmd", 00:14:18.265 "conserve_cpu": true, 00:14:18.265 "filename": "/dev/ng0n1", 00:14:18.265 "name": "xnvme_bdev" 00:14:18.265 }, 00:14:18.265 "method": "bdev_xnvme_create" 00:14:18.265 }, 00:14:18.265 { 00:14:18.265 "method": "bdev_wait_for_examine" 00:14:18.265 } 00:14:18.265 ] 00:14:18.265 } 00:14:18.265 ] 00:14:18.265 } 00:14:18.265 [2024-11-28 05:03:47.352000] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
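Relative to the conserve_cpu=false run of the same four workloads, the only knob changed in the generated config is conserve_cpu, whose intent is to let the xNVMe backend wait for completions rather than busy-poll — consistent with the noticeably lower sys time reported at the end of this block (6.2s vs 10.5s earlier) for similar throughput. Extracting the knob from the config blob, as a sketch with $conf holding the JSON shown above:

    jq '.subsystems[].config[]
        | select(.method == "bdev_xnvme_create").params.conserve_cpu' "$conf"
    # -> true   (false in the previous pass)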
00:14:18.265 [2024-11-28 05:03:47.352144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82616 ] 00:14:18.265 [2024-11-28 05:03:47.499711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.265 [2024-11-28 05:03:47.529755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.526 Running I/O for 5 seconds... 00:14:20.417 35621.00 IOPS, 139.14 MiB/s [2024-11-28T05:03:50.717Z] 35612.00 IOPS, 139.11 MiB/s [2024-11-28T05:03:51.664Z] 36311.67 IOPS, 141.84 MiB/s [2024-11-28T05:03:53.050Z] 36099.75 IOPS, 141.01 MiB/s 00:14:23.766 Latency(us) 00:14:23.766 [2024-11-28T05:03:53.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.766 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:23.766 xnvme_bdev : 5.00 35834.97 139.98 0.00 0.00 1781.65 683.72 5419.32 00:14:23.766 [2024-11-28T05:03:53.050Z] =================================================================================================================== 00:14:23.766 [2024-11-28T05:03:53.050Z] Total : 35834.97 139.98 0.00 0.00 1781.65 683.72 5419.32 00:14:23.766 05:03:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:23.766 05:03:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:23.766 05:03:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:23.766 05:03:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:23.766 05:03:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:23.766 { 00:14:23.766 "subsystems": [ 00:14:23.766 { 00:14:23.766 "subsystem": "bdev", 00:14:23.766 "config": [ 00:14:23.766 { 00:14:23.766 "params": { 00:14:23.766 "io_mechanism": "io_uring_cmd", 00:14:23.766 "conserve_cpu": true, 00:14:23.766 "filename": "/dev/ng0n1", 00:14:23.766 "name": "xnvme_bdev" 00:14:23.766 }, 00:14:23.766 "method": "bdev_xnvme_create" 00:14:23.766 }, 00:14:23.766 { 00:14:23.766 "method": "bdev_wait_for_examine" 00:14:23.766 } 00:14:23.766 ] 00:14:23.766 } 00:14:23.766 ] 00:14:23.766 } 00:14:23.766 [2024-11-28 05:03:52.874872] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:23.766 [2024-11-28 05:03:52.875010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82679 ] 00:14:23.766 [2024-11-28 05:03:53.023391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.028 [2024-11-28 05:03:53.052987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.028 Running I/O for 5 seconds... 
00:14:25.922 79232.00 IOPS, 309.50 MiB/s [2024-11-28T05:03:56.593Z] 79744.00 IOPS, 311.50 MiB/s [2024-11-28T05:03:57.166Z] 79893.33 IOPS, 312.08 MiB/s [2024-11-28T05:03:58.551Z] 79840.00 IOPS, 311.88 MiB/s [2024-11-28T05:03:58.551Z] 79436.80 IOPS, 310.30 MiB/s 00:14:29.267 Latency(us) 00:14:29.267 [2024-11-28T05:03:58.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.267 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:29.267 xnvme_bdev : 5.00 79412.39 310.20 0.00 0.00 802.46 453.71 3478.45 00:14:29.267 [2024-11-28T05:03:58.551Z] =================================================================================================================== 00:14:29.267 [2024-11-28T05:03:58.551Z] Total : 79412.39 310.20 0.00 0.00 802.46 453.71 3478.45 00:14:29.267 05:03:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:29.267 05:03:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:29.267 05:03:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:29.267 05:03:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:29.267 05:03:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:29.267 { 00:14:29.267 "subsystems": [ 00:14:29.267 { 00:14:29.267 "subsystem": "bdev", 00:14:29.267 "config": [ 00:14:29.267 { 00:14:29.267 "params": { 00:14:29.267 "io_mechanism": "io_uring_cmd", 00:14:29.267 "conserve_cpu": true, 00:14:29.267 "filename": "/dev/ng0n1", 00:14:29.267 "name": "xnvme_bdev" 00:14:29.267 }, 00:14:29.267 "method": "bdev_xnvme_create" 00:14:29.267 }, 00:14:29.267 { 00:14:29.267 "method": "bdev_wait_for_examine" 00:14:29.267 } 00:14:29.267 ] 00:14:29.267 } 00:14:29.267 ] 00:14:29.267 } 00:14:29.267 [2024-11-28 05:03:58.407336] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:29.268 [2024-11-28 05:03:58.407480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82742 ] 00:14:29.529 [2024-11-28 05:03:58.553847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.529 [2024-11-28 05:03:58.582981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.529 Running I/O for 5 seconds... 
00:14:31.411 42624.00 IOPS, 166.50 MiB/s [2024-11-28T05:04:02.077Z] 46153.50 IOPS, 180.29 MiB/s [2024-11-28T05:04:03.015Z] 44972.33 IOPS, 175.67 MiB/s [2024-11-28T05:04:03.958Z] 43629.00 IOPS, 170.43 MiB/s [2024-11-28T05:04:03.958Z] 42722.00 IOPS, 166.88 MiB/s 00:14:34.674 Latency(us) 00:14:34.674 [2024-11-28T05:04:03.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.674 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:34.674 xnvme_bdev : 5.00 42698.82 166.79 0.00 0.00 1493.43 280.42 20366.57 00:14:34.674 [2024-11-28T05:04:03.958Z] =================================================================================================================== 00:14:34.674 [2024-11-28T05:04:03.958Z] Total : 42698.82 166.79 0.00 0.00 1493.43 280.42 20366.57 00:14:34.674 00:14:34.674 real 0m22.184s 00:14:34.674 user 0m13.705s 00:14:34.674 sys 0m6.162s 00:14:34.674 05:04:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.674 ************************************ 00:14:34.674 END TEST xnvme_bdevperf 00:14:34.674 ************************************ 00:14:34.674 05:04:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:34.936 05:04:03 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:34.936 05:04:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:34.936 05:04:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.936 05:04:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:34.936 ************************************ 00:14:34.936 START TEST xnvme_fio_plugin 00:14:34.936 ************************************ 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:34.936 05:04:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.936 { 00:14:34.936 "subsystems": [ 00:14:34.936 { 00:14:34.936 "subsystem": "bdev", 00:14:34.936 "config": [ 00:14:34.936 { 00:14:34.936 "params": { 00:14:34.936 "io_mechanism": "io_uring_cmd", 00:14:34.936 "conserve_cpu": true, 00:14:34.936 "filename": "/dev/ng0n1", 00:14:34.936 "name": "xnvme_bdev" 00:14:34.936 }, 00:14:34.936 "method": "bdev_xnvme_create" 00:14:34.936 }, 00:14:34.936 { 00:14:34.936 "method": "bdev_wait_for_examine" 00:14:34.936 } 00:14:34.936 ] 00:14:34.936 } 00:14:34.936 ] 00:14:34.936 } 00:14:34.936 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:34.936 fio-3.35 00:14:34.936 Starting 1 thread 00:14:41.527 00:14:41.527 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82848: Thu Nov 28 05:04:09 2024 00:14:41.527 read: IOPS=39.3k, BW=154MiB/s (161MB/s)(768MiB/5002msec) 00:14:41.527 slat (nsec): min=2888, max=57602, avg=3580.95, stdev=1848.75 00:14:41.527 clat (usec): min=812, max=4360, avg=1483.45, stdev=285.09 00:14:41.527 lat (usec): min=815, max=4366, avg=1487.03, stdev=285.59 00:14:41.527 clat percentiles (usec): 00:14:41.527 | 1.00th=[ 996], 5.00th=[ 1106], 10.00th=[ 1156], 20.00th=[ 1237], 00:14:41.527 | 30.00th=[ 1303], 40.00th=[ 1385], 50.00th=[ 1450], 60.00th=[ 1532], 00:14:41.527 | 70.00th=[ 1614], 80.00th=[ 1696], 90.00th=[ 1860], 95.00th=[ 2008], 00:14:41.527 | 99.00th=[ 2245], 99.50th=[ 2376], 99.90th=[ 2868], 99.95th=[ 3195], 00:14:41.527 | 99.99th=[ 4293] 00:14:41.527 bw ( KiB/s): min=142336, max=180224, per=99.94%, avg=157184.00, stdev=13824.00, samples=9 00:14:41.527 iops : min=35584, max=45056, avg=39296.00, stdev=3456.00, samples=9 00:14:41.527 lat (usec) : 1000=1.08% 00:14:41.527 lat (msec) : 2=93.85%, 4=5.04%, 10=0.03% 00:14:41.527 cpu : usr=64.13%, sys=32.69%, ctx=7, majf=0, minf=1063 00:14:41.527 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:41.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.527 complete : 0=0.0%, 4=98.5%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:41.527 issued rwts: total=196672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.527 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:41.527 00:14:41.527 Run status group 0 (all jobs): 00:14:41.527 READ: bw=154MiB/s (161MB/s), 154MiB/s-154MiB/s (161MB/s-161MB/s), io=768MiB (806MB), run=5002-5002msec 00:14:41.527 ----------------------------------------------------- 00:14:41.527 Suppressions used: 00:14:41.527 count bytes template 00:14:41.527 1 11 /usr/src/fio/parse.c 00:14:41.527 1 8 libtcmalloc_minimal.so 00:14:41.527 1 904 libcrypto.so 00:14:41.527 ----------------------------------------------------- 00:14:41.527 00:14:41.527 05:04:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:41.527 05:04:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:41.527 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:41.527 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:41.527 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:41.527 05:04:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:41.527 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:41.527 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:41.527 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:41.527 05:04:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:41.527 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:41.527 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:41.527 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:41.528 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:41.528 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:41.528 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:41.528 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:41.528 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:41.528 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:41.528 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:41.528 05:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 
--bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:41.528 { 00:14:41.528 "subsystems": [ 00:14:41.528 { 00:14:41.528 "subsystem": "bdev", 00:14:41.528 "config": [ 00:14:41.528 { 00:14:41.528 "params": { 00:14:41.528 "io_mechanism": "io_uring_cmd", 00:14:41.528 "conserve_cpu": true, 00:14:41.528 "filename": "/dev/ng0n1", 00:14:41.528 "name": "xnvme_bdev" 00:14:41.528 }, 00:14:41.528 "method": "bdev_xnvme_create" 00:14:41.528 }, 00:14:41.528 { 00:14:41.528 "method": "bdev_wait_for_examine" 00:14:41.528 } 00:14:41.528 ] 00:14:41.528 } 00:14:41.528 ] 00:14:41.528 } 00:14:41.528 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:41.528 fio-3.35 00:14:41.528 Starting 1 thread 00:14:46.822 00:14:46.822 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82930: Thu Nov 28 05:04:15 2024 00:14:46.822 write: IOPS=38.8k, BW=152MiB/s (159MB/s)(758MiB/5002msec); 0 zone resets 00:14:46.823 slat (usec): min=2, max=517, avg= 4.27, stdev= 4.02 00:14:46.823 clat (usec): min=202, max=9981, avg=1485.74, stdev=354.04 00:14:46.823 lat (usec): min=206, max=9985, avg=1490.01, stdev=354.44 00:14:46.823 clat percentiles (usec): 00:14:46.823 | 1.00th=[ 873], 5.00th=[ 1037], 10.00th=[ 1123], 20.00th=[ 1221], 00:14:46.823 | 30.00th=[ 1303], 40.00th=[ 1369], 50.00th=[ 1450], 60.00th=[ 1516], 00:14:46.823 | 70.00th=[ 1614], 80.00th=[ 1713], 90.00th=[ 1876], 95.00th=[ 2024], 00:14:46.823 | 99.00th=[ 2474], 99.50th=[ 2933], 99.90th=[ 4293], 99.95th=[ 5014], 00:14:46.823 | 99.99th=[ 8029] 00:14:46.823 bw ( KiB/s): min=144768, max=167073, per=99.63%, avg=154665.89, stdev=7181.20, samples=9 00:14:46.823 iops : min=36192, max=41768, avg=38666.44, stdev=1795.25, samples=9 00:14:46.823 lat (usec) : 250=0.01%, 500=0.03%, 750=0.24%, 1000=3.17% 00:14:46.823 lat (msec) : 2=91.04%, 4=5.39%, 10=0.13% 00:14:46.823 cpu : usr=46.05%, sys=44.67%, ctx=17, majf=0, minf=1064 00:14:46.823 IO depths : 1=1.2%, 2=2.5%, 4=5.4%, 8=11.5%, 16=24.6%, 32=52.9%, >=64=1.9% 00:14:46.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.823 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:46.823 issued rwts: total=0,194134,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.823 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:46.823 00:14:46.823 Run status group 0 (all jobs): 00:14:46.823 WRITE: bw=152MiB/s (159MB/s), 152MiB/s-152MiB/s (159MB/s-159MB/s), io=758MiB (795MB), run=5002-5002msec 00:14:47.084 ----------------------------------------------------- 00:14:47.084 Suppressions used: 00:14:47.084 count bytes template 00:14:47.084 1 11 /usr/src/fio/parse.c 00:14:47.084 1 8 libtcmalloc_minimal.so 00:14:47.084 1 904 libcrypto.so 00:14:47.084 ----------------------------------------------------- 00:14:47.084 00:14:47.084 00:14:47.084 real 0m12.224s 00:14:47.084 user 0m6.740s 00:14:47.084 sys 0m4.538s 00:14:47.084 05:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.084 ************************************ 00:14:47.084 END TEST xnvme_fio_plugin 00:14:47.084 ************************************ 00:14:47.084 05:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:47.084 05:04:16 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 82491 00:14:47.084 05:04:16 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 82491 ']' 00:14:47.084 Process with pid 82491 is not found 00:14:47.084 05:04:16 
nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 82491 00:14:47.084 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82491) - No such process 00:14:47.084 05:04:16 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 82491 is not found' 00:14:47.084 00:14:47.084 real 2m57.753s 00:14:47.084 user 1m27.439s 00:14:47.084 sys 1m16.117s 00:14:47.084 05:04:16 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.084 ************************************ 00:14:47.084 END TEST nvme_xnvme 00:14:47.084 ************************************ 00:14:47.084 05:04:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.084 05:04:16 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:47.084 05:04:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:47.084 05:04:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.084 05:04:16 -- common/autotest_common.sh@10 -- # set +x 00:14:47.084 ************************************ 00:14:47.084 START TEST blockdev_xnvme 00:14:47.084 ************************************ 00:14:47.084 05:04:16 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:47.347 * Looking for test storage... 00:14:47.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:47.347 05:04:16 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:47.347 05:04:16 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:47.347 05:04:16 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:47.347 05:04:16 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:47.347 05:04:16 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:14:47.347 05:04:16 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:47.347 05:04:16 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:47.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.347 --rc genhtml_branch_coverage=1 00:14:47.347 --rc genhtml_function_coverage=1 00:14:47.347 --rc genhtml_legend=1 00:14:47.347 --rc geninfo_all_blocks=1 00:14:47.347 --rc geninfo_unexecuted_blocks=1 00:14:47.347 00:14:47.347 ' 00:14:47.347 05:04:16 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:47.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.347 --rc genhtml_branch_coverage=1 00:14:47.347 --rc genhtml_function_coverage=1 00:14:47.347 --rc genhtml_legend=1 00:14:47.347 --rc geninfo_all_blocks=1 00:14:47.347 --rc geninfo_unexecuted_blocks=1 00:14:47.347 00:14:47.347 ' 00:14:47.347 05:04:16 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:47.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.347 --rc genhtml_branch_coverage=1 00:14:47.347 --rc genhtml_function_coverage=1 00:14:47.347 --rc genhtml_legend=1 00:14:47.347 --rc geninfo_all_blocks=1 00:14:47.347 --rc geninfo_unexecuted_blocks=1 00:14:47.347 00:14:47.347 ' 00:14:47.347 05:04:16 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:47.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.347 --rc genhtml_branch_coverage=1 00:14:47.347 --rc genhtml_function_coverage=1 00:14:47.347 --rc genhtml_legend=1 00:14:47.347 --rc geninfo_all_blocks=1 00:14:47.347 --rc geninfo_unexecuted_blocks=1 00:14:47.347 00:14:47.347 ' 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=83059 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 83059 00:14:47.347 05:04:16 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 83059 ']' 00:14:47.347 05:04:16 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:47.347 05:04:16 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.347 05:04:16 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.347 05:04:16 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.347 05:04:16 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.347 05:04:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.348 [2024-11-28 05:04:16.623984] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
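The waitforlisten step above does not sleep for a fixed interval; it polls the target's RPC socket (up to max_retries=100, per the xtrace) until spdk_tgt answers. A minimal sketch of that loop, assuming the stock rpc.py and the default /var/tmp/spdk.sock socket; the probe RPC used here (rpc_get_methods) is illustrative:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for ((i = 0; i < 100; i++)); do
    # rpc_get_methods only succeeds once the target is up and serving RPCs
    if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
      break
    fi
    sleep 0.1
  done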
00:14:47.348 [2024-11-28 05:04:16.624374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83059 ] 00:14:47.609 [2024-11-28 05:04:16.769360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.609 [2024-11-28 05:04:16.810059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.556 05:04:17 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.556 05:04:17 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:14:48.556 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:14:48.556 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:14:48.556 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:48.556 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:48.556 05:04:17 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:48.817 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:49.390 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:14:49.390 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:14:49.390 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:14:49.390 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0c0n1 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0c0n1 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in 
/sys/block/nvme* 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n2 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n2 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n3 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n3 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:14:49.390 05:04:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- 
# for nvme in /dev/nvme*n* 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n2 ]] 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n3 ]] 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:14:49.390 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:49.391 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:49.391 05:04:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.391 05:04:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:49.391 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n2 nvme3n2 io_uring -c' 'bdev_xnvme_create /dev/nvme3n3 nvme3n3 io_uring -c' 00:14:49.391 nvme0n1 00:14:49.391 nvme1n1 00:14:49.391 nvme2n1 00:14:49.391 nvme3n1 00:14:49.391 nvme3n2 00:14:49.391 nvme3n3 00:14:49.391 05:04:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.391 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:14:49.391 05:04:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.391 05:04:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:49.391 05:04:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.391 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:14:49.391 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:14:49.391 05:04:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.391 05:04:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:49.652 05:04:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.652 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:14:49.652 05:04:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.652 05:04:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:49.652 05:04:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.652 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:49.652 05:04:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.652 05:04:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:49.652 05:04:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.652 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:14:49.652 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:14:49.652 05:04:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.653 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:14:49.653 05:04:18 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:49.653 05:04:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.653 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:14:49.653 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ba3eeef8-dc21-490a-b41f-9a9621bb84a8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ba3eeef8-dc21-490a-b41f-9a9621bb84a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "e1e5b32c-c4c3-45bc-8a41-0f3b107b8d9d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e1e5b32c-c4c3-45bc-8a41-0f3b107b8d9d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4ad3ad7b-da56-4904-b313-0d51d3c21cd8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4ad3ad7b-da56-4904-b313-0d51d3c21cd8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "342388f1-52c2-433d-9970-3f153d4e9fa0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "342388f1-52c2-433d-9970-3f153d4e9fa0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' 
' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n2",' ' "aliases": [' ' "b1c99600-f353-40d4-979e-934a969e4c51"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b1c99600-f353-40d4-979e-934a969e4c51",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n3",' ' "aliases": [' ' "caa06b25-4e09-4314-adee-9e6abe4a6e05"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "caa06b25-4e09-4314-adee-9e6abe4a6e05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:49.653 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:14:49.653 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:14:49.653 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:14:49.653 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:14:49.653 05:04:18 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 83059 00:14:49.653 05:04:18 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 83059 ']' 00:14:49.653 05:04:18 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 83059 00:14:49.653 05:04:18 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:14:49.653 05:04:18 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.653 05:04:18 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83059 00:14:49.653 killing process with pid 83059 00:14:49.653 05:04:18 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:49.653 05:04:18 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:49.653 05:04:18 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83059' 00:14:49.653 05:04:18 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 83059 
00:14:49.653 05:04:18 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 83059 00:14:50.225 05:04:19 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:50.225 05:04:19 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:50.225 05:04:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:50.225 05:04:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.225 05:04:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:50.225 ************************************ 00:14:50.225 START TEST bdev_hello_world 00:14:50.225 ************************************ 00:14:50.225 05:04:19 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:50.226 [2024-11-28 05:04:19.412372] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:50.226 [2024-11-28 05:04:19.412693] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83332 ] 00:14:50.486 [2024-11-28 05:04:19.560262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.486 [2024-11-28 05:04:19.597239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.747 [2024-11-28 05:04:19.851883] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:50.747 [2024-11-28 05:04:19.851953] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:50.747 [2024-11-28 05:04:19.851979] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:50.747 [2024-11-28 05:04:19.854381] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:50.747 [2024-11-28 05:04:19.855361] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:50.747 [2024-11-28 05:04:19.855417] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:50.747 [2024-11-28 05:04:19.856017] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
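The hello_bdev example opens the bdev named with -b, writes a test string through an I/O channel, reads it back, and logs it; that is exactly the NOTICE sequence above. Run standalone from the SPDK repo root, the equivalent invocation is:

  ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b nvme0n1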
00:14:50.747 00:14:50.747 [2024-11-28 05:04:19.856055] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:51.009 ************************************ 00:14:51.009 END TEST bdev_hello_world 00:14:51.009 ************************************ 00:14:51.009 00:14:51.009 real 0m0.773s 00:14:51.009 user 0m0.400s 00:14:51.009 sys 0m0.231s 00:14:51.009 05:04:20 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.009 05:04:20 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:51.009 05:04:20 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:14:51.009 05:04:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:51.009 05:04:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.009 05:04:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:51.009 ************************************ 00:14:51.009 START TEST bdev_bounds 00:14:51.009 ************************************ 00:14:51.009 05:04:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:14:51.009 05:04:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=83365 00:14:51.009 Process bdevio pid: 83365 00:14:51.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.009 05:04:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:51.009 05:04:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 83365' 00:14:51.009 05:04:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 83365 00:14:51.009 05:04:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 83365 ']' 00:14:51.009 05:04:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:51.009 05:04:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.009 05:04:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.009 05:04:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.009 05:04:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.009 05:04:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:51.009 [2024-11-28 05:04:20.255682] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
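bdevio is launched above with -w, so it initializes and then waits for an RPC instead of running anything on its own; the harness then triggers the CUnit suites with the companion driver script, as the next lines show. A sketch of the pair, paths relative to the repo root:

  ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
  ./test/bdev/bdevio/tests.py perform_tests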
00:14:51.009 [2024-11-28 05:04:20.255835] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83365 ] 00:14:51.271 [2024-11-28 05:04:20.405150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:51.271 [2024-11-28 05:04:20.448949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.271 [2024-11-28 05:04:20.449317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.271 [2024-11-28 05:04:20.449451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.218 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.218 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:14:52.218 05:04:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:52.218 I/O targets: 00:14:52.218 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:52.218 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:52.218 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:52.218 nvme3n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:52.218 nvme3n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:52.218 nvme3n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:52.218 00:14:52.218 00:14:52.218 CUnit - A unit testing framework for C - Version 2.1-3 00:14:52.218 http://cunit.sourceforge.net/ 00:14:52.218 00:14:52.218 00:14:52.218 Suite: bdevio tests on: nvme3n3 00:14:52.218 Test: blockdev write read block ...passed 00:14:52.218 Test: blockdev write zeroes read block ...passed 00:14:52.218 Test: blockdev write zeroes read no split ...passed 00:14:52.218 Test: blockdev write zeroes read split ...passed 00:14:52.218 Test: blockdev write zeroes read split partial ...passed 00:14:52.218 Test: blockdev reset ...passed 00:14:52.218 Test: blockdev write read 8 blocks ...passed 00:14:52.218 Test: blockdev write read size > 128k ...passed 00:14:52.218 Test: blockdev write read invalid size ...passed 00:14:52.218 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:52.218 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:52.218 Test: blockdev write read max offset ...passed 00:14:52.218 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:52.218 Test: blockdev writev readv 8 blocks ...passed 00:14:52.218 Test: blockdev writev readv 30 x 1block ...passed 00:14:52.218 Test: blockdev writev readv block ...passed 00:14:52.218 Test: blockdev writev readv size > 128k ...passed 00:14:52.218 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:52.218 Test: blockdev comparev and writev ...passed 00:14:52.218 Test: blockdev nvme passthru rw ...passed 00:14:52.218 Test: blockdev nvme passthru vendor specific ...passed 00:14:52.218 Test: blockdev nvme admin passthru ...passed 00:14:52.218 Test: blockdev copy ...passed 00:14:52.218 Suite: bdevio tests on: nvme3n2 00:14:52.218 Test: blockdev write read block ...passed 00:14:52.218 Test: blockdev write zeroes read block ...passed 00:14:52.218 Test: blockdev write zeroes read no split ...passed 00:14:52.218 Test: blockdev write zeroes read split ...passed 00:14:52.218 Test: blockdev write zeroes read split partial ...passed 00:14:52.218 Test: blockdev reset ...passed 
00:14:52.218 Test: blockdev write read 8 blocks ...passed 00:14:52.218 Test: blockdev write read size > 128k ...passed 00:14:52.218 Test: blockdev write read invalid size ...passed 00:14:52.218 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:52.218 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:52.218 Test: blockdev write read max offset ...passed 00:14:52.218 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:52.218 Test: blockdev writev readv 8 blocks ...passed 00:14:52.218 Test: blockdev writev readv 30 x 1block ...passed 00:14:52.218 Test: blockdev writev readv block ...passed 00:14:52.218 Test: blockdev writev readv size > 128k ...passed 00:14:52.218 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:52.218 Test: blockdev comparev and writev ...passed 00:14:52.218 Test: blockdev nvme passthru rw ...passed 00:14:52.218 Test: blockdev nvme passthru vendor specific ...passed 00:14:52.218 Test: blockdev nvme admin passthru ...passed 00:14:52.218 Test: blockdev copy ...passed 00:14:52.218 Suite: bdevio tests on: nvme3n1 00:14:52.218 Test: blockdev write read block ...passed 00:14:52.218 Test: blockdev write zeroes read block ...passed 00:14:52.218 Test: blockdev write zeroes read no split ...passed 00:14:52.218 Test: blockdev write zeroes read split ...passed 00:14:52.218 Test: blockdev write zeroes read split partial ...passed 00:14:52.218 Test: blockdev reset ...passed 00:14:52.218 Test: blockdev write read 8 blocks ...passed 00:14:52.218 Test: blockdev write read size > 128k ...passed 00:14:52.218 Test: blockdev write read invalid size ...passed 00:14:52.218 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:52.218 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:52.219 Test: blockdev write read max offset ...passed 00:14:52.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:52.219 Test: blockdev writev readv 8 blocks ...passed 00:14:52.219 Test: blockdev writev readv 30 x 1block ...passed 00:14:52.219 Test: blockdev writev readv block ...passed 00:14:52.219 Test: blockdev writev readv size > 128k ...passed 00:14:52.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:52.219 Test: blockdev comparev and writev ...passed 00:14:52.219 Test: blockdev nvme passthru rw ...passed 00:14:52.219 Test: blockdev nvme passthru vendor specific ...passed 00:14:52.219 Test: blockdev nvme admin passthru ...passed 00:14:52.219 Test: blockdev copy ...passed 00:14:52.219 Suite: bdevio tests on: nvme2n1 00:14:52.219 Test: blockdev write read block ...passed 00:14:52.219 Test: blockdev write zeroes read block ...passed 00:14:52.219 Test: blockdev write zeroes read no split ...passed 00:14:52.219 Test: blockdev write zeroes read split ...passed 00:14:52.219 Test: blockdev write zeroes read split partial ...passed 00:14:52.219 Test: blockdev reset ...passed 00:14:52.219 Test: blockdev write read 8 blocks ...passed 00:14:52.219 Test: blockdev write read size > 128k ...passed 00:14:52.219 Test: blockdev write read invalid size ...passed 00:14:52.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:52.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:52.219 Test: blockdev write read max offset ...passed 00:14:52.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:52.219 Test: blockdev writev readv 8 blocks 
...passed 00:14:52.219 Test: blockdev writev readv 30 x 1block ...passed 00:14:52.219 Test: blockdev writev readv block ...passed 00:14:52.219 Test: blockdev writev readv size > 128k ...passed 00:14:52.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:52.219 Test: blockdev comparev and writev ...passed 00:14:52.219 Test: blockdev nvme passthru rw ...passed 00:14:52.219 Test: blockdev nvme passthru vendor specific ...passed 00:14:52.219 Test: blockdev nvme admin passthru ...passed 00:14:52.219 Test: blockdev copy ...passed 00:14:52.219 Suite: bdevio tests on: nvme1n1 00:14:52.219 Test: blockdev write read block ...passed 00:14:52.219 Test: blockdev write zeroes read block ...passed 00:14:52.219 Test: blockdev write zeroes read no split ...passed 00:14:52.219 Test: blockdev write zeroes read split ...passed 00:14:52.219 Test: blockdev write zeroes read split partial ...passed 00:14:52.219 Test: blockdev reset ...passed 00:14:52.219 Test: blockdev write read 8 blocks ...passed 00:14:52.219 Test: blockdev write read size > 128k ...passed 00:14:52.219 Test: blockdev write read invalid size ...passed 00:14:52.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:52.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:52.219 Test: blockdev write read max offset ...passed 00:14:52.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:52.219 Test: blockdev writev readv 8 blocks ...passed 00:14:52.219 Test: blockdev writev readv 30 x 1block ...passed 00:14:52.219 Test: blockdev writev readv block ...passed 00:14:52.219 Test: blockdev writev readv size > 128k ...passed 00:14:52.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:52.219 Test: blockdev comparev and writev ...passed 00:14:52.219 Test: blockdev nvme passthru rw ...passed 00:14:52.219 Test: blockdev nvme passthru vendor specific ...passed 00:14:52.219 Test: blockdev nvme admin passthru ...passed 00:14:52.219 Test: blockdev copy ...passed 00:14:52.219 Suite: bdevio tests on: nvme0n1 00:14:52.219 Test: blockdev write read block ...passed 00:14:52.219 Test: blockdev write zeroes read block ...passed 00:14:52.219 Test: blockdev write zeroes read no split ...passed 00:14:52.219 Test: blockdev write zeroes read split ...passed 00:14:52.219 Test: blockdev write zeroes read split partial ...passed 00:14:52.219 Test: blockdev reset ...passed 00:14:52.219 Test: blockdev write read 8 blocks ...passed 00:14:52.219 Test: blockdev write read size > 128k ...passed 00:14:52.219 Test: blockdev write read invalid size ...passed 00:14:52.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:52.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:52.219 Test: blockdev write read max offset ...passed 00:14:52.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:52.219 Test: blockdev writev readv 8 blocks ...passed 00:14:52.219 Test: blockdev writev readv 30 x 1block ...passed 00:14:52.219 Test: blockdev writev readv block ...passed 00:14:52.480 Test: blockdev writev readv size > 128k ...passed 00:14:52.480 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:52.480 Test: blockdev comparev and writev ...passed 00:14:52.480 Test: blockdev nvme passthru rw ...passed 00:14:52.480 Test: blockdev nvme passthru vendor specific ...passed 00:14:52.480 Test: blockdev nvme admin passthru ...passed 00:14:52.480 Test: blockdev copy ...passed 
00:14:52.480 00:14:52.480 Run Summary: Type Total Ran Passed Failed Inactive 00:14:52.480 suites 6 6 n/a 0 0 00:14:52.480 tests 138 138 138 0 0 00:14:52.480 asserts 780 780 780 0 n/a 00:14:52.480 00:14:52.480 Elapsed time = 0.588 seconds 00:14:52.480 0 00:14:52.480 05:04:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 83365 00:14:52.480 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 83365 ']' 00:14:52.480 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 83365 00:14:52.480 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:14:52.480 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.480 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83365 00:14:52.480 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.480 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.480 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83365' 00:14:52.480 killing process with pid 83365 00:14:52.480 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 83365 00:14:52.480 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 83365 00:14:52.742 05:04:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:52.742 00:14:52.742 real 0m1.625s 00:14:52.742 user 0m3.973s 00:14:52.742 sys 0m0.350s 00:14:52.742 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.742 05:04:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:52.742 ************************************ 00:14:52.742 END TEST bdev_bounds 00:14:52.742 ************************************ 00:14:52.742 05:04:21 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme3n1 nvme3n2 nvme3n3' '' 00:14:52.742 05:04:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:52.742 05:04:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.742 05:04:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:52.742 ************************************ 00:14:52.742 START TEST bdev_nbd 00:14:52.742 ************************************ 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme3n1 nvme3n2 nvme3n3' '' 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme3n1' 'nvme3n2' 'nvme3n3') 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
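The killprocess helper, used above for pids 82491, 83059, and now 83365, refuses to signal anything that is not alive, guards against sudo wrappers, and reaps the child so the next stage starts from a clean slate. Condensed to its core (a sketch, not the full helper):

  killprocess() {
    local pid=$1
    kill -0 "$pid" 2> /dev/null || return 0    # already gone, nothing to do
    [[ "$(ps --no-headers -o comm= "$pid")" == sudo ]] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true    # a SIGTERM'd child exits nonzero; that is expected
  }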
00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme3n1' 'nvme3n2' 'nvme3n3') 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=83412 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 83412 /var/tmp/spdk-nbd.sock 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:52.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 83412 ']' 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.742 05:04:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:52.742 [2024-11-28 05:04:21.970441] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
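Once bdev_svc is up, the loop below walks the six bdevs, attaches each to an nbd node over the dedicated /var/tmp/spdk-nbd.sock, waits for the kernel to publish it in /proc/partitions, and proves it is readable with one 4 KiB direct-I/O block. One iteration, condensed (the RPC prints the allocated device, e.g. /dev/nbd0; the output file path is illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nbd=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1)
  # wait for the partition table entry, then do a single direct read
  until grep -q -w "${nbd##*/}" /proc/partitions; do sleep 0.1; done
  dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct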
00:14:52.742 [2024-11-28 05:04:21.970821] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.003 [2024-11-28 05:04:22.117880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.003 [2024-11-28 05:04:22.157765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.577 05:04:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.577 05:04:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:14:53.577 05:04:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme3n1 nvme3n2 nvme3n3' 00:14:53.577 05:04:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:53.577 05:04:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme3n1' 'nvme3n2' 'nvme3n3') 00:14:53.577 05:04:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:53.577 05:04:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme3n1 nvme3n2 nvme3n3' 00:14:53.577 05:04:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:53.577 05:04:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme3n1' 'nvme3n2' 'nvme3n3') 00:14:53.577 05:04:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:53.577 05:04:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:53.577 05:04:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:53.577 05:04:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:53.577 05:04:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:53.577 05:04:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:53.878 
1+0 records in 00:14:53.878 1+0 records out 00:14:53.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105288 s, 3.9 MB/s 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:53.878 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.152 1+0 records in 00:14:54.152 1+0 records out 00:14:54.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116104 s, 3.5 MB/s 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:54.152 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:54.153 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:54.153 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:54.414 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:54.414 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:54.414 05:04:23 
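The block above is the per-device attach-and-probe cycle: nbd_start_disk exports a bdev as a /dev/nbdX node over the RPC socket, then waitfornbd (from common/autotest_common.sh) polls /proc/partitions for up to 20 iterations and confirms the node actually serves I/O with a single 4 KiB O_DIRECT read. A minimal sketch of that readiness check, with illustrative names and a /tmp file standing in for the repo's nbdtest path:

# Sketch: poll until the kernel publishes the device, then prove it is readable.
waitfornbd_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1
    for ((i = 1; i <= 20; i++)); do
        # One 4 KiB direct read; a short read or I/O error means "not ready yet".
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null &&
           [[ $(stat -c %s /tmp/nbdtest) -eq 4096 ]]; then
            rm -f /tmp/nbdtest
            return 0
        fi
        sleep 0.1
    done
    rm -f /tmp/nbdtest
    return 1
}

The same cycle then repeats for nvme1n1 through nvme3n3 on /dev/nbd1../dev/nbd5 below.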
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:54.414 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:14:54.415 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:54.415 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:54.415 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:54.415 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:14:54.415 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:54.415 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:54.415 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:54.415 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.415 1+0 records in 00:14:54.415 1+0 records out 00:14:54.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123778 s, 3.3 MB/s 00:14:54.415 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.415 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:54.415 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.415 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:54.415 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:54.415 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:54.415 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:54.415 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.677 1+0 records in 00:14:54.677 1+0 records out 00:14:54.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000860088 s, 4.8 MB/s 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:54.677 05:04:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n2 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.938 1+0 records in 00:14:54.938 1+0 records out 00:14:54.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138687 s, 3.0 MB/s 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:54.938 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n3 00:14:55.198 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:55.198 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:55.198 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:55.198 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:14:55.198 05:04:24 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:55.198 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:55.198 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:55.198 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:14:55.198 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:55.198 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:55.198 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:55.199 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.199 1+0 records in 00:14:55.199 1+0 records out 00:14:55.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102982 s, 4.0 MB/s 00:14:55.199 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.199 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:55.199 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.199 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:55.199 05:04:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:55.199 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:55.199 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:55.199 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:55.461 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:55.461 { 00:14:55.461 "nbd_device": "/dev/nbd0", 00:14:55.461 "bdev_name": "nvme0n1" 00:14:55.461 }, 00:14:55.461 { 00:14:55.461 "nbd_device": "/dev/nbd1", 00:14:55.461 "bdev_name": "nvme1n1" 00:14:55.461 }, 00:14:55.461 { 00:14:55.461 "nbd_device": "/dev/nbd2", 00:14:55.461 "bdev_name": "nvme2n1" 00:14:55.461 }, 00:14:55.461 { 00:14:55.461 "nbd_device": "/dev/nbd3", 00:14:55.461 "bdev_name": "nvme3n1" 00:14:55.461 }, 00:14:55.461 { 00:14:55.461 "nbd_device": "/dev/nbd4", 00:14:55.461 "bdev_name": "nvme3n2" 00:14:55.461 }, 00:14:55.461 { 00:14:55.461 "nbd_device": "/dev/nbd5", 00:14:55.461 "bdev_name": "nvme3n3" 00:14:55.461 } 00:14:55.461 ]' 00:14:55.461 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:55.461 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:55.461 { 00:14:55.461 "nbd_device": "/dev/nbd0", 00:14:55.461 "bdev_name": "nvme0n1" 00:14:55.461 }, 00:14:55.461 { 00:14:55.461 "nbd_device": "/dev/nbd1", 00:14:55.461 "bdev_name": "nvme1n1" 00:14:55.461 }, 00:14:55.461 { 00:14:55.461 "nbd_device": "/dev/nbd2", 00:14:55.461 "bdev_name": "nvme2n1" 00:14:55.461 }, 00:14:55.461 { 00:14:55.461 "nbd_device": "/dev/nbd3", 00:14:55.461 "bdev_name": "nvme3n1" 00:14:55.461 }, 00:14:55.461 { 00:14:55.461 "nbd_device": "/dev/nbd4", 00:14:55.461 "bdev_name": "nvme3n2" 00:14:55.461 }, 00:14:55.461 { 00:14:55.461 "nbd_device": "/dev/nbd5", 00:14:55.461 "bdev_name": "nvme3n3" 00:14:55.461 } 00:14:55.461 ]' 00:14:55.461 05:04:24 
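With all six devices up, nbd_get_disks returns the node-to-bdev mapping as JSON, and the script pulls the /dev/nbdX names out with jq (and later counts them with grep -c to assert all six are present). A hedged sketch of the same parse, reusing the RPC socket from this run:

# Sketch: list and count the NBD devices reported by the SPDK app.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)
# One /dev/nbdX per line, e.g. /dev/nbd0 .. /dev/nbd5 for this run.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
# grep -c prints 0 but exits non-zero when nothing matches, hence the guard.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count NBD devices attached"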
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:55.461 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:55.461 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:55.461 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:55.461 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:55.461 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:55.461 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:55.461 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:55.722 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:55.722 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:55.722 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:55.722 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.722 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.722 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:55.722 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:55.722 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.722 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:55.722 05:04:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:55.984 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:55.984 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:55.984 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:55.984 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.984 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.984 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:55.984 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:55.984 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.984 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:55.984 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:56.245 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:56.245 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:56.245 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:56.245 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.245 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.245 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:14:56.245 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:56.245 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.245 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:56.245 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:56.507 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:56.508 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:56.769 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:56.769 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:56.769 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:56.769 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.769 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.769 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:56.769 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:56.769 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.769 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:56.769 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:56.769 05:04:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
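Detach mirrors attach: nbd_stop_disk tears each export down over RPC and waitfornbd_exit waits for the /proc/partitions entry to disappear before moving on (the break/return 0 pairs above). A minimal sketch under the same assumptions as the readiness check:

# Sketch: succeed as soon as the kernel drops the partition entry.
waitfornbd_exit_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1   # still present after the retry budget; let the test fail
}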
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme3n1 nvme3n2 nvme3n3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme3n1' 'nvme3n2' 'nvme3n3') 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme3n1 nvme3n2 nvme3n3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme3n1' 'nvme3n2' 'nvme3n3') 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:57.031 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:57.293 /dev/nbd0 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- 
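The start/stop pass ends with nbd_get_disks returning an empty list (count=0), and nbd_rpc_data_verify begins: this time each bdev is pinned to an explicit node by passing the device path as the second argument to nbd_start_disk. For example, with the bdev names from this run:

# Sketch: attach specific bdevs at chosen NBD nodes over the app's RPC socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0
"$rpc" -s "$sock" nbd_start_disk nvme1n1 /dev/nbd1
"$rpc" -s "$sock" nbd_start_disk nvme2n1 /dev/nbd10
# ...and likewise nvme3n1..nvme3n3 onto /dev/nbd11../dev/nbd13,
# each followed by the same waitfornbd readiness probe as before.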
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.293 1+0 records in 00:14:57.293 1+0 records out 00:14:57.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000967194 s, 4.2 MB/s 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:57.293 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:57.554 /dev/nbd1 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.554 1+0 records in 00:14:57.554 1+0 records out 00:14:57.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000871091 s, 4.7 MB/s 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:57.554 05:04:26 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:57.554 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:14:57.815 /dev/nbd10 00:14:57.815 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:57.815 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.816 1+0 records in 00:14:57.816 1+0 records out 00:14:57.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107101 s, 3.8 MB/s 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:57.816 05:04:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd11 00:14:58.078 /dev/nbd11 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:58.078 05:04:27 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.078 1+0 records in 00:14:58.078 1+0 records out 00:14:58.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108246 s, 3.8 MB/s 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:58.078 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n2 /dev/nbd12 00:14:58.078 /dev/nbd12 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.339 1+0 records in 00:14:58.339 1+0 records out 00:14:58.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000832927 s, 4.9 MB/s 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n3 /dev/nbd13 00:14:58.339 /dev/nbd13 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.339 1+0 records in 00:14:58.339 1+0 records out 00:14:58.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115199 s, 3.6 MB/s 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:58.339 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:58.340 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:58.340 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:58.340 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:58.601 { 00:14:58.601 "nbd_device": "/dev/nbd0", 00:14:58.601 "bdev_name": "nvme0n1" 00:14:58.601 }, 00:14:58.601 { 00:14:58.601 "nbd_device": "/dev/nbd1", 00:14:58.601 "bdev_name": "nvme1n1" 00:14:58.601 }, 00:14:58.601 { 00:14:58.601 "nbd_device": "/dev/nbd10", 00:14:58.601 "bdev_name": "nvme2n1" 00:14:58.601 }, 00:14:58.601 { 00:14:58.601 "nbd_device": "/dev/nbd11", 00:14:58.601 "bdev_name": "nvme3n1" 00:14:58.601 }, 00:14:58.601 { 00:14:58.601 "nbd_device": "/dev/nbd12", 00:14:58.601 "bdev_name": "nvme3n2" 00:14:58.601 }, 00:14:58.601 { 00:14:58.601 "nbd_device": "/dev/nbd13", 00:14:58.601 "bdev_name": "nvme3n3" 00:14:58.601 } 00:14:58.601 ]' 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:14:58.601 { 00:14:58.601 "nbd_device": "/dev/nbd0", 00:14:58.601 "bdev_name": "nvme0n1" 00:14:58.601 }, 00:14:58.601 { 00:14:58.601 "nbd_device": "/dev/nbd1", 00:14:58.601 "bdev_name": "nvme1n1" 00:14:58.601 }, 00:14:58.601 { 00:14:58.601 "nbd_device": "/dev/nbd10", 00:14:58.601 "bdev_name": "nvme2n1" 00:14:58.601 }, 00:14:58.601 { 00:14:58.601 "nbd_device": "/dev/nbd11", 00:14:58.601 "bdev_name": "nvme3n1" 00:14:58.601 }, 00:14:58.601 { 00:14:58.601 "nbd_device": "/dev/nbd12", 00:14:58.601 "bdev_name": "nvme3n2" 00:14:58.601 }, 00:14:58.601 { 00:14:58.601 "nbd_device": "/dev/nbd13", 00:14:58.601 "bdev_name": "nvme3n3" 00:14:58.601 } 00:14:58.601 ]' 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:58.601 /dev/nbd1 00:14:58.601 /dev/nbd10 00:14:58.601 /dev/nbd11 00:14:58.601 /dev/nbd12 00:14:58.601 /dev/nbd13' 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:58.601 /dev/nbd1 00:14:58.601 /dev/nbd10 00:14:58.601 /dev/nbd11 00:14:58.601 /dev/nbd12 00:14:58.601 /dev/nbd13' 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:58.601 256+0 records in 00:14:58.601 256+0 records out 00:14:58.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0070752 s, 148 MB/s 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:58.601 05:04:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:58.863 256+0 records in 00:14:58.863 256+0 records out 00:14:58.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191637 s, 5.5 MB/s 00:14:58.863 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:58.863 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:59.125 256+0 records in 00:14:59.125 256+0 records out 00:14:59.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.310084 s, 
3.4 MB/s 00:14:59.125 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:59.125 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:59.387 256+0 records in 00:14:59.387 256+0 records out 00:14:59.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.204637 s, 5.1 MB/s 00:14:59.387 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:59.387 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:59.650 256+0 records in 00:14:59.650 256+0 records out 00:14:59.650 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245933 s, 4.3 MB/s 00:14:59.650 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:59.650 05:04:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:59.912 256+0 records in 00:14:59.912 256+0 records out 00:14:59.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243206 s, 4.3 MB/s 00:14:59.912 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:59.912 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:00.174 256+0 records in 00:15:00.174 256+0 records out 00:15:00.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.207475 s, 5.1 MB/s 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:00.174 
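nbd_dd_data_verify is the actual data-integrity pass: one 1 MiB buffer of /dev/urandom data is written through every node with O_DIRECT, then each device is compared back against the source with cmp, bounded to the bytes that were written. Condensed, the pass running here amounts to:

# Sketch: write one random 1 MiB pattern to all nodes, then verify each.
tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB pattern
for dev in "${nbd_list[@]}"; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$dev"    # -b prints differing bytes on mismatch
done
rm "$tmp"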
05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.174 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:00.436 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:00.436 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:00.436 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:00.436 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.436 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.436 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:00.436 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:00.436 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.436 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.436 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:00.698 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:00.698 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:00.698 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:00.698 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.698 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.698 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:00.698 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:00.698 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.698 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.698 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:00.698 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:00.698 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:00.698 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:00.698 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.698 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.698 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:00.960 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:00.960 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.960 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.960 05:04:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:00.960 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:00.960 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:00.960 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:00.960 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.960 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.960 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:00.960 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:00.960 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.960 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.960 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:01.222 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:01.222 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:01.222 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:01.222 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.222 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.222 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:01.222 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:01.222 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.222 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:01.222 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:01.483 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:01.483 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:01.483 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:01.483 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.483 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.483 
05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:01.483 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:01.483 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.483 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:01.483 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:01.483 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:01.743 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:01.743 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:01.743 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:01.743 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:01.743 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:01.743 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:01.743 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:01.743 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:01.743 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:01.743 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:01.743 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:01.743 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:01.743 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:01.743 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:01.743 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:01.743 05:04:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:02.002 malloc_lvol_verify 00:15:02.002 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:02.002 b99159bf-662f-4db1-94f7-e99984eae848 00:15:02.002 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:02.262 c3be3ddb-b499-47cf-898e-84f38a4936ae 00:15:02.262 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:02.522 /dev/nbd0 00:15:02.522 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:02.522 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:02.522 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:02.522 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:02.522 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:02.522 mke2fs 1.47.0 (5-Feb-2023) 00:15:02.522 Discarding device blocks: 0/4096 
done 00:15:02.522 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:02.522 00:15:02.522 Allocating group tables: 0/1 done 00:15:02.522 Writing inode tables: 0/1 done 00:15:02.522 Creating journal (1024 blocks): done 00:15:02.522 Writing superblocks and filesystem accounting information: 0/1 done 00:15:02.522 00:15:02.522 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:02.522 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:02.522 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:02.522 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:02.522 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:02.522 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:02.522 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 83412 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 83412 ']' 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 83412 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83412 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:02.783 killing process with pid 83412 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83412' 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 83412 00:15:02.783 05:04:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 83412 00:15:02.783 05:04:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:02.783 00:15:02.783 real 0m10.177s 00:15:02.783 user 0m13.791s 00:15:02.783 sys 0m3.695s 00:15:02.783 05:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.783 05:04:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:03.045 ************************************ 00:15:03.045 END TEST bdev_nbd 00:15:03.045 ************************************ 
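The final nbd check, nbd_with_lvol_verify, layers a logical volume on a RAM-backed bdev and proves the export can hold a real filesystem: a 16 MiB malloc bdev with 512-byte blocks carries an lvstore, a 4 MiB lvol from it is exported as /dev/nbd0, and mkfs.ext4 formats it, which is why mke2fs reports 4096 1k blocks above. Roughly:

# Sketch: prove an exported lvol holds a filesystem end to end.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB volume
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0                                                 # 4096 x 1k blocks
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0

After that the helper kills the bdev_svc app (pid 83412 in this run) and bdev_nbd finishes in roughly 10 seconds of wall time.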
00:15:03.045 05:04:32 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:03.045 05:04:32 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:03.045 05:04:32 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:03.045 05:04:32 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:03.045 05:04:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:03.045 05:04:32 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.045 05:04:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:03.045 ************************************ 00:15:03.045 START TEST bdev_fio 00:15:03.045 ************************************ 00:15:03.045 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:03.045 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n2]' 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n2 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n3]' 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n3 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:03.046 ************************************ 00:15:03.046 START TEST bdev_fio_rw_verify 00:15:03.046 ************************************ 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:03.046 05:04:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:03.306 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:03.307 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:03.307 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:03.307 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:03.307 job_nvme3n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:03.307 job_nvme3n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:03.307 fio-3.35 00:15:03.307 Starting 6 threads 00:15:15.539 00:15:15.539 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=83812: Thu Nov 28 05:04:42 2024 00:15:15.540 read: IOPS=13.6k, BW=53.3MiB/s (55.9MB/s)(533MiB/10001msec) 00:15:15.540 slat (usec): min=2, max=2442, avg= 7.22, stdev=17.41 00:15:15.540 clat (usec): min=98, max=10585, avg=1457.32, stdev=802.10 00:15:15.540 lat (usec): min=108, max=10603, avg=1464.54, stdev=802.71 
00:15:15.540 clat percentiles (usec): 00:15:15.540 | 50.000th=[ 1352], 99.000th=[ 3982], 99.900th=[ 5473], 99.990th=[ 8291], 00:15:15.540 | 99.999th=[10552] 00:15:15.540 write: IOPS=13.9k, BW=54.5MiB/s (57.1MB/s)(545MiB/10001msec); 0 zone resets 00:15:15.540 slat (usec): min=13, max=5811, avg=41.86, stdev=142.56 00:15:15.540 clat (usec): min=89, max=9490, avg=1677.57, stdev=827.08 00:15:15.540 lat (usec): min=102, max=9522, avg=1719.42, stdev=839.38 00:15:15.540 clat percentiles (usec): 00:15:15.540 | 50.000th=[ 1549], 99.000th=[ 4228], 99.900th=[ 5604], 99.990th=[ 7570], 00:15:15.540 | 99.999th=[ 8160] 00:15:15.540 bw ( KiB/s): min=48254, max=73832, per=100.00%, avg=56028.37, stdev=1311.77, samples=114 00:15:15.540 iops : min=12062, max=18458, avg=14006.68, stdev=327.96, samples=114 00:15:15.540 lat (usec) : 100=0.01%, 250=1.42%, 500=5.24%, 750=7.54%, 1000=10.74% 00:15:15.540 lat (msec) : 2=50.17%, 4=23.67%, 10=1.22%, 20=0.01% 00:15:15.540 cpu : usr=44.99%, sys=30.92%, ctx=4862, majf=0, minf=15939 00:15:15.540 IO depths : 1=11.6%, 2=24.0%, 4=51.0%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:15.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.540 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.540 issued rwts: total=136482,139458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.540 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:15.540 00:15:15.540 Run status group 0 (all jobs): 00:15:15.540 READ: bw=53.3MiB/s (55.9MB/s), 53.3MiB/s-53.3MiB/s (55.9MB/s-55.9MB/s), io=533MiB (559MB), run=10001-10001msec 00:15:15.540 WRITE: bw=54.5MiB/s (57.1MB/s), 54.5MiB/s-54.5MiB/s (57.1MB/s-57.1MB/s), io=545MiB (571MB), run=10001-10001msec 00:15:15.540 ----------------------------------------------------- 00:15:15.540 Suppressions used: 00:15:15.540 count bytes template 00:15:15.540 6 48 /usr/src/fio/parse.c 00:15:15.540 2900 278400 /usr/src/fio/iolog.c 00:15:15.540 1 8 libtcmalloc_minimal.so 00:15:15.540 1 904 libcrypto.so 00:15:15.540 ----------------------------------------------------- 00:15:15.540 00:15:15.540 00:15:15.540 real 0m11.149s 00:15:15.540 user 0m27.696s 00:15:15.540 sys 0m18.865s 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:15.540 ************************************ 00:15:15.540 END TEST bdev_fio_rw_verify 00:15:15.540 ************************************ 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ba3eeef8-dc21-490a-b41f-9a9621bb84a8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ba3eeef8-dc21-490a-b41f-9a9621bb84a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "e1e5b32c-c4c3-45bc-8a41-0f3b107b8d9d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e1e5b32c-c4c3-45bc-8a41-0f3b107b8d9d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4ad3ad7b-da56-4904-b313-0d51d3c21cd8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4ad3ad7b-da56-4904-b313-0d51d3c21cd8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "342388f1-52c2-433d-9970-3f153d4e9fa0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "342388f1-52c2-433d-9970-3f153d4e9fa0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n2",' ' "aliases": [' ' "b1c99600-f353-40d4-979e-934a969e4c51"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b1c99600-f353-40d4-979e-934a969e4c51",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n3",' ' "aliases": [' ' "caa06b25-4e09-4314-adee-9e6abe4a6e05"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "caa06b25-4e09-4314-adee-9e6abe4a6e05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:15.540 /home/vagrant/spdk_repo/spdk 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:15.540 00:15:15.540 real 0m11.319s 00:15:15.540 user 0m27.773s 
00:15:15.540 sys 0m18.934s 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.540 05:04:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:15.540 ************************************ 00:15:15.540 END TEST bdev_fio 00:15:15.540 ************************************ 00:15:15.540 05:04:43 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:15.540 05:04:43 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:15.540 05:04:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:15.540 05:04:43 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.540 05:04:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.540 ************************************ 00:15:15.540 START TEST bdev_verify 00:15:15.540 ************************************ 00:15:15.540 05:04:43 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:15.541 [2024-11-28 05:04:43.577332] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:15.541 [2024-11-28 05:04:43.577482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83977 ] 00:15:15.541 [2024-11-28 05:04:43.723930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:15.541 [2024-11-28 05:04:43.753854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.541 [2024-11-28 05:04:43.753906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.541 Running I/O for 5 seconds... 
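Looking back at the bdev_fio suite that finished above: blockdev.sh generates bdev.fio by appending one [job_*] section per bdev and passes everything else on the fio command line, so the run can be reproduced by hand. A distilled sketch follows (paths as in this workspace; the [global] header is an assumption, since the log only shows the per-job sections and the serialize_overlap=1 line being appended):

  cat > /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio <<'EOF'
  [global]
  serialize_overlap=1        # echoed by fio_config_gen for the AIO/verify case, per the log
  [job_nvme0n1]
  filename=nvme0n1
  [job_nvme1n1]
  filename=nvme1n1
  EOF
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output

The LD_PRELOAD is not decoration: the sanitizer runtime has to be loaded before the dlopen'd spdk_bdev plugin, which is why the test greps the plugin's ldd output for libasan and prepends it before invoking fio.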
00:15:17.052 26880.00 IOPS, 105.00 MiB/s [2024-11-28T05:04:47.280Z] 26032.00 IOPS, 101.69 MiB/s [2024-11-28T05:04:48.668Z] 25450.67 IOPS, 99.42 MiB/s [2024-11-28T05:04:49.241Z] 25504.00 IOPS, 99.62 MiB/s [2024-11-28T05:04:49.241Z] 25216.00 IOPS, 98.50 MiB/s 00:15:19.957 Latency(us) 00:15:19.957 [2024-11-28T05:04:49.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.957 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:19.957 Verification LBA range: start 0x0 length 0x20000 00:15:19.957 nvme0n1 : 5.04 1954.51 7.63 0.00 0.00 65355.14 10183.29 64931.05 00:15:19.957 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:19.957 Verification LBA range: start 0x20000 length 0x20000 00:15:19.957 nvme0n1 : 5.06 1946.21 7.60 0.00 0.00 65668.53 10939.47 64527.75 00:15:19.957 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:19.957 Verification LBA range: start 0x0 length 0xbd0bd 00:15:19.957 nvme1n1 : 5.06 2610.05 10.20 0.00 0.00 48813.65 5444.53 56058.49 00:15:19.957 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:19.957 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:19.957 nvme1n1 : 5.05 2679.45 10.47 0.00 0.00 47525.35 5293.29 57268.38 00:15:19.957 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:19.957 Verification LBA range: start 0x0 length 0xa0000 00:15:19.957 nvme2n1 : 5.07 1995.94 7.80 0.00 0.00 63663.64 9830.40 61301.37 00:15:19.957 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:19.957 Verification LBA range: start 0xa0000 length 0xa0000 00:15:19.957 nvme2n1 : 5.04 1980.21 7.74 0.00 0.00 64335.93 8217.21 62511.26 00:15:19.957 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:19.957 Verification LBA range: start 0x0 length 0x80000 00:15:19.957 nvme3n1 : 5.07 1969.92 7.70 0.00 0.00 64357.02 9830.40 68157.44 00:15:19.957 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:19.957 Verification LBA range: start 0x80000 length 0x80000 00:15:19.957 nvme3n1 : 5.06 1948.21 7.61 0.00 0.00 65280.99 7662.67 66947.54 00:15:19.957 Job: nvme3n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:19.957 Verification LBA range: start 0x0 length 0x80000 00:15:19.957 nvme3n2 : 5.07 1968.84 7.69 0.00 0.00 64311.67 8771.74 60494.77 00:15:19.957 Job: nvme3n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:19.957 Verification LBA range: start 0x80000 length 0x80000 00:15:19.957 nvme3n2 : 5.07 1945.02 7.60 0.00 0.00 65290.30 10032.05 56058.49 00:15:19.957 Job: nvme3n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:19.957 Verification LBA range: start 0x0 length 0x80000 00:15:19.957 nvme3n3 : 5.08 1990.54 7.78 0.00 0.00 63569.04 5091.64 64527.75 00:15:19.957 Job: nvme3n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:19.957 Verification LBA range: start 0x80000 length 0x80000 00:15:19.957 nvme3n3 : 5.06 1947.45 7.61 0.00 0.00 65108.50 5192.47 62107.96 00:15:19.957 [2024-11-28T05:04:49.241Z] =================================================================================================================== 00:15:19.957 [2024-11-28T05:04:49.241Z] Total : 24936.37 97.41 0.00 0.00 61184.30 5091.64 68157.44 00:15:20.219 00:15:20.219 real 0m5.854s 00:15:20.219 user 0m9.252s 00:15:20.219 sys 0m1.570s 00:15:20.219 ************************************ 00:15:20.219 END TEST 
bdev_verify 00:15:20.219 ************************************ 00:15:20.219 05:04:49 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:20.219 05:04:49 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:20.219 05:04:49 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:20.219 05:04:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:20.219 05:04:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:20.219 05:04:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.219 ************************************ 00:15:20.219 START TEST bdev_verify_big_io 00:15:20.219 ************************************ 00:15:20.219 05:04:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:20.480 [2024-11-28 05:04:49.503488] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:20.480 [2024-11-28 05:04:49.503625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84068 ] 00:15:20.480 [2024-11-28 05:04:49.649708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:20.480 [2024-11-28 05:04:49.680172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.480 [2024-11-28 05:04:49.680243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.741 Running I/O for 5 seconds... 
00:15:26.864 2832.00 IOPS, 177.00 MiB/s [2024-11-28T05:04:56.410Z] 3172.00 IOPS, 198.25 MiB/s [2024-11-28T05:04:56.410Z] 3378.67 IOPS, 211.17 MiB/s 00:15:27.126 Latency(us) 00:15:27.126 [2024-11-28T05:04:56.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.126 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:27.126 Verification LBA range: start 0x0 length 0x2000 00:15:27.126 nvme0n1 : 6.12 47.09 2.94 0.00 0.00 2557808.46 496863.70 3097332.18 00:15:27.126 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:27.126 Verification LBA range: start 0x2000 length 0x2000 00:15:27.126 nvme0n1 : 5.60 145.83 9.11 0.00 0.00 853291.32 87515.77 1503496.66 00:15:27.126 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:27.126 Verification LBA range: start 0x0 length 0xbd0b 00:15:27.126 nvme1n1 : 5.85 109.43 6.84 0.00 0.00 1080144.50 28029.24 1303460.63 00:15:27.126 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:27.126 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:27.126 nvme1n1 : 5.69 191.36 11.96 0.00 0.00 639214.44 8469.27 813049.70 00:15:27.126 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:27.126 Verification LBA range: start 0x0 length 0xa000 00:15:27.126 nvme2n1 : 5.94 107.67 6.73 0.00 0.00 1042415.62 52428.80 1077613.49 00:15:27.126 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:27.126 Verification LBA range: start 0xa000 length 0xa000 00:15:27.126 nvme2n1 : 5.61 156.99 9.81 0.00 0.00 758859.73 11191.53 1064707.94 00:15:27.126 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:27.126 Verification LBA range: start 0x0 length 0x8000 00:15:27.126 nvme3n1 : 5.95 108.99 6.81 0.00 0.00 979370.48 52428.80 1122782.92 00:15:27.126 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:27.126 Verification LBA range: start 0x8000 length 0x8000 00:15:27.126 nvme3n1 : 5.61 142.58 8.91 0.00 0.00 811963.82 51622.20 916294.10 00:15:27.126 Job: nvme3n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:27.126 Verification LBA range: start 0x0 length 0x8000 00:15:27.126 nvme3n2 : 6.12 146.46 9.15 0.00 0.00 698666.87 21677.29 1148594.02 00:15:27.126 Job: nvme3n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:27.126 Verification LBA range: start 0x8000 length 0x8000 00:15:27.126 nvme3n2 : 5.76 152.76 9.55 0.00 0.00 738541.91 71787.13 1155046.79 00:15:27.126 Job: nvme3n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:27.126 Verification LBA range: start 0x0 length 0x8000 00:15:27.126 nvme3n3 : 6.34 272.39 17.02 0.00 0.00 362089.47 753.03 1387346.71 00:15:27.126 Job: nvme3n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:27.126 Verification LBA range: start 0x8000 length 0x8000 00:15:27.126 nvme3n3 : 5.78 177.15 11.07 0.00 0.00 631731.08 1216.20 813049.70 00:15:27.126 [2024-11-28T05:04:56.410Z] =================================================================================================================== 00:15:27.126 [2024-11-28T05:04:56.410Z] Total : 1758.68 109.92 0.00 0.00 773367.90 753.03 3097332.18 00:15:27.387 00:15:27.387 real 0m7.122s 00:15:27.387 user 0m13.131s 00:15:27.387 sys 0m0.435s 00:15:27.387 05:04:56 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.387 05:04:56 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.387 ************************************ 00:15:27.387 END TEST bdev_verify_big_io 00:15:27.387 ************************************ 00:15:27.387 05:04:56 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:27.387 05:04:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:27.387 05:04:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.387 05:04:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.387 ************************************ 00:15:27.387 START TEST bdev_write_zeroes 00:15:27.387 ************************************ 00:15:27.387 05:04:56 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:27.649 [2024-11-28 05:04:56.673931] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:27.649 [2024-11-28 05:04:56.674040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84171 ] 00:15:27.649 [2024-11-28 05:04:56.810885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.649 [2024-11-28 05:04:56.831407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.910 Running I/O for 1 seconds... 
00:15:28.854 84832.00 IOPS, 331.38 MiB/s 00:15:28.854 Latency(us) 00:15:28.854 [2024-11-28T05:04:58.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.854 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:28.854 nvme0n1 : 1.01 13646.04 53.30 0.00 0.00 9369.99 4763.96 24601.21 00:15:28.854 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:28.854 nvme1n1 : 1.02 16044.19 62.67 0.00 0.00 7962.18 4486.70 19156.68 00:15:28.854 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:28.854 nvme2n1 : 1.01 13754.82 53.73 0.00 0.00 9281.97 4864.79 22383.06 00:15:28.854 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:28.854 nvme3n1 : 1.02 13612.17 53.17 0.00 0.00 9334.16 4511.90 21273.99 00:15:28.854 Job: nvme3n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:28.854 nvme3n2 : 1.02 13596.81 53.11 0.00 0.00 9338.05 4587.52 20971.52 00:15:28.854 Job: nvme3n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:28.854 nvme3n3 : 1.02 13707.30 53.54 0.00 0.00 9255.04 2734.87 22181.42 00:15:28.854 [2024-11-28T05:04:58.138Z] =================================================================================================================== 00:15:28.854 [2024-11-28T05:04:58.138Z] Total : 84361.33 329.54 0.00 0.00 9057.50 2734.87 24601.21 00:15:29.116 00:15:29.116 real 0m1.627s 00:15:29.116 user 0m1.018s 00:15:29.116 sys 0m0.420s 00:15:29.116 05:04:58 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.116 ************************************ 00:15:29.116 END TEST bdev_write_zeroes 00:15:29.116 ************************************ 00:15:29.116 05:04:58 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:29.116 05:04:58 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:29.116 05:04:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:29.116 05:04:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.116 05:04:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.116 ************************************ 00:15:29.116 START TEST bdev_json_nonenclosed 00:15:29.116 ************************************ 00:15:29.116 05:04:58 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:29.116 [2024-11-28 05:04:58.377063] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:15:29.116 [2024-11-28 05:04:58.377206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84210 ] 00:15:29.378 [2024-11-28 05:04:58.522865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.378 [2024-11-28 05:04:58.551194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.378 [2024-11-28 05:04:58.551290] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:29.378 [2024-11-28 05:04:58.551309] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:29.378 [2024-11-28 05:04:58.551322] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:29.378 00:15:29.378 real 0m0.320s 00:15:29.378 user 0m0.125s 00:15:29.378 sys 0m0.091s 00:15:29.378 05:04:58 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.378 ************************************ 00:15:29.378 END TEST bdev_json_nonenclosed 00:15:29.378 ************************************ 00:15:29.378 05:04:58 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:29.665 05:04:58 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:29.665 05:04:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:29.665 05:04:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.665 05:04:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.665 ************************************ 00:15:29.665 START TEST bdev_json_nonarray 00:15:29.665 ************************************ 00:15:29.665 05:04:58 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:29.665 [2024-11-28 05:04:58.771277] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:29.665 [2024-11-28 05:04:58.771601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84230 ] 00:15:29.665 [2024-11-28 05:04:58.920005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.941 [2024-11-28 05:04:58.950155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.941 [2024-11-28 05:04:58.950288] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
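Both JSON negative tests work the same way: bdevperf is pointed at a deliberately malformed config (nonenclosed.json drops the outer braces; nonarray.json makes "subsystems" something other than an array), and the pass condition is exactly the *ERROR* line above plus the non-zero spdk_app_stop that follows, never successful I/O. The shape json_config_prepare_ctx is enforcing, inferred from these error messages and matching the save_config dump later in this log, is the usual skeleton {"subsystems": [ { "subsystem": ..., "config": [...] } ]}; anything else is rejected before any bdev is created.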
00:15:29.941 [2024-11-28 05:04:58.950306] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:29.941 [2024-11-28 05:04:58.950321] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:29.941 00:15:29.941 real 0m0.329s 00:15:29.941 user 0m0.134s 00:15:29.941 sys 0m0.091s 00:15:29.941 05:04:59 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.941 ************************************ 00:15:29.941 END TEST bdev_json_nonarray 00:15:29.941 ************************************ 00:15:29.941 05:04:59 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:29.941 05:04:59 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:15:29.941 05:04:59 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:15:29.941 05:04:59 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:15:29.941 05:04:59 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:15:29.941 05:04:59 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:15:29.941 05:04:59 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:29.941 05:04:59 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:29.941 05:04:59 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:29.941 05:04:59 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:29.941 05:04:59 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:29.941 05:04:59 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:29.941 05:04:59 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:30.516 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:37.099 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:37.100 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:37.100 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:37.100 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:37.100 00:15:37.100 real 0m49.708s 00:15:37.100 user 1m13.521s 00:15:37.100 sys 0m40.050s 00:15:37.100 05:05:06 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.100 ************************************ 00:15:37.100 END TEST blockdev_xnvme 00:15:37.100 ************************************ 00:15:37.100 05:05:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.100 05:05:06 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:37.100 05:05:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:37.100 05:05:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.100 05:05:06 -- common/autotest_common.sh@10 -- # set +x 00:15:37.100 ************************************ 00:15:37.100 START TEST ublk 00:15:37.100 ************************************ 00:15:37.100 05:05:06 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:37.100 * Looking for test storage... 
00:15:37.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:37.100 05:05:06 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:37.100 05:05:06 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:15:37.100 05:05:06 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:37.100 05:05:06 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:37.100 05:05:06 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:37.100 05:05:06 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:37.100 05:05:06 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:37.100 05:05:06 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.100 05:05:06 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:15:37.100 05:05:06 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:15:37.100 05:05:06 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:15:37.100 05:05:06 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:15:37.100 05:05:06 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:15:37.100 05:05:06 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:15:37.100 05:05:06 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:37.100 05:05:06 ublk -- scripts/common.sh@344 -- # case "$op" in 00:15:37.100 05:05:06 ublk -- scripts/common.sh@345 -- # : 1 00:15:37.100 05:05:06 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:37.100 05:05:06 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:37.100 05:05:06 ublk -- scripts/common.sh@365 -- # decimal 1 00:15:37.100 05:05:06 ublk -- scripts/common.sh@353 -- # local d=1 00:15:37.100 05:05:06 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.100 05:05:06 ublk -- scripts/common.sh@355 -- # echo 1 00:15:37.100 05:05:06 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.100 05:05:06 ublk -- scripts/common.sh@366 -- # decimal 2 00:15:37.100 05:05:06 ublk -- scripts/common.sh@353 -- # local d=2 00:15:37.100 05:05:06 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.100 05:05:06 ublk -- scripts/common.sh@355 -- # echo 2 00:15:37.100 05:05:06 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:15:37.100 05:05:06 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.100 05:05:06 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.100 05:05:06 ublk -- scripts/common.sh@368 -- # return 0 00:15:37.100 05:05:06 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.100 05:05:06 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:37.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.100 --rc genhtml_branch_coverage=1 00:15:37.100 --rc genhtml_function_coverage=1 00:15:37.100 --rc genhtml_legend=1 00:15:37.100 --rc geninfo_all_blocks=1 00:15:37.100 --rc geninfo_unexecuted_blocks=1 00:15:37.100 00:15:37.100 ' 00:15:37.100 05:05:06 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:37.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.100 --rc genhtml_branch_coverage=1 00:15:37.100 --rc genhtml_function_coverage=1 00:15:37.100 --rc genhtml_legend=1 00:15:37.100 --rc geninfo_all_blocks=1 00:15:37.100 --rc geninfo_unexecuted_blocks=1 00:15:37.100 00:15:37.100 ' 00:15:37.100 05:05:06 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:37.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.100 --rc genhtml_branch_coverage=1 00:15:37.100 --rc 
genhtml_function_coverage=1 00:15:37.100 --rc genhtml_legend=1 00:15:37.100 --rc geninfo_all_blocks=1 00:15:37.100 --rc geninfo_unexecuted_blocks=1 00:15:37.100 00:15:37.100 ' 00:15:37.100 05:05:06 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:37.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.100 --rc genhtml_branch_coverage=1 00:15:37.100 --rc genhtml_function_coverage=1 00:15:37.100 --rc genhtml_legend=1 00:15:37.100 --rc geninfo_all_blocks=1 00:15:37.100 --rc geninfo_unexecuted_blocks=1 00:15:37.100 00:15:37.100 ' 00:15:37.100 05:05:06 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:37.100 05:05:06 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:37.100 05:05:06 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:37.100 05:05:06 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:37.100 05:05:06 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:37.100 05:05:06 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:37.100 05:05:06 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:37.100 05:05:06 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:37.100 05:05:06 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:37.100 05:05:06 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:37.100 05:05:06 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:37.100 05:05:06 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:37.100 05:05:06 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:37.100 05:05:06 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:37.100 05:05:06 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:37.100 05:05:06 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:37.100 05:05:06 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:37.100 05:05:06 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:37.100 05:05:06 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:37.100 05:05:06 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:37.100 05:05:06 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:37.100 05:05:06 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.100 05:05:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:37.100 ************************************ 00:15:37.100 START TEST test_save_ublk_config 00:15:37.100 ************************************ 00:15:37.100 05:05:06 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:15:37.100 05:05:06 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:37.100 05:05:06 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=84536 00:15:37.100 05:05:06 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:37.100 05:05:06 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:37.100 05:05:06 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 84536 00:15:37.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
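Once the target is listening, test_save_ublk_config provisions a ublk device and immediately snapshots the configuration. Stripped of the xtrace noise, the provisioning below reduces to three RPC calls. A sketch follows: the JSON-RPC method names and parameters are taken verbatim from the save_config dump further down, while the rpc.py argument spellings are an assumption:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py     # default socket /var/tmp/spdk.sock
  $RPC ublk_create_target                             # saved as {"cpumask": "1"}
  $RPC bdev_malloc_create -b malloc0 32 4096          # 8192 blocks x 4096 B = 32 MB in the dump
  $RPC ublk_start_disk malloc0 0                      # ublk_id 0; dump shows num_queues 1, queue_depth 128

The point of the test is not the device itself but that save_config reproduces all of this, ublk section included, so a target restored from the saved JSON would come back with the same /dev/ublkb0.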
00:15:37.100 05:05:06 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 84536 ']' 00:15:37.100 05:05:06 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.100 05:05:06 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.100 05:05:06 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.100 05:05:06 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.100 05:05:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:37.362 [2024-11-28 05:05:06.397428] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:37.362 [2024-11-28 05:05:06.397578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84536 ] 00:15:37.362 [2024-11-28 05:05:06.545681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.362 [2024-11-28 05:05:06.574419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.303 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.303 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:15:38.303 05:05:07 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:38.303 05:05:07 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:38.303 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.303 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:38.303 [2024-11-28 05:05:07.243201] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:38.303 [2024-11-28 05:05:07.244171] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:38.303 malloc0 00:15:38.303 [2024-11-28 05:05:07.275316] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:38.303 [2024-11-28 05:05:07.275399] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:38.303 [2024-11-28 05:05:07.275413] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:38.303 [2024-11-28 05:05:07.275427] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:38.303 [2024-11-28 05:05:07.284297] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:38.303 [2024-11-28 05:05:07.284341] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:38.303 [2024-11-28 05:05:07.291220] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:38.303 [2024-11-28 05:05:07.291343] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:38.303 [2024-11-28 05:05:07.308211] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:38.303 0 00:15:38.303 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.303 05:05:07 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:38.303 05:05:07 ublk.test_save_ublk_config -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.303 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:38.564 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.564 05:05:07 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:38.564 "subsystems": [ 00:15:38.564 { 00:15:38.564 "subsystem": "fsdev", 00:15:38.564 "config": [ 00:15:38.564 { 00:15:38.564 "method": "fsdev_set_opts", 00:15:38.564 "params": { 00:15:38.564 "fsdev_io_pool_size": 65535, 00:15:38.564 "fsdev_io_cache_size": 256 00:15:38.564 } 00:15:38.564 } 00:15:38.564 ] 00:15:38.564 }, 00:15:38.564 { 00:15:38.564 "subsystem": "keyring", 00:15:38.564 "config": [] 00:15:38.564 }, 00:15:38.564 { 00:15:38.564 "subsystem": "iobuf", 00:15:38.564 "config": [ 00:15:38.564 { 00:15:38.564 "method": "iobuf_set_options", 00:15:38.564 "params": { 00:15:38.564 "small_pool_count": 8192, 00:15:38.564 "large_pool_count": 1024, 00:15:38.564 "small_bufsize": 8192, 00:15:38.564 "large_bufsize": 135168, 00:15:38.564 "enable_numa": false 00:15:38.564 } 00:15:38.564 } 00:15:38.564 ] 00:15:38.564 }, 00:15:38.564 { 00:15:38.564 "subsystem": "sock", 00:15:38.564 "config": [ 00:15:38.564 { 00:15:38.564 "method": "sock_set_default_impl", 00:15:38.564 "params": { 00:15:38.564 "impl_name": "posix" 00:15:38.564 } 00:15:38.564 }, 00:15:38.564 { 00:15:38.564 "method": "sock_impl_set_options", 00:15:38.564 "params": { 00:15:38.564 "impl_name": "ssl", 00:15:38.564 "recv_buf_size": 4096, 00:15:38.564 "send_buf_size": 4096, 00:15:38.564 "enable_recv_pipe": true, 00:15:38.564 "enable_quickack": false, 00:15:38.564 "enable_placement_id": 0, 00:15:38.564 "enable_zerocopy_send_server": true, 00:15:38.564 "enable_zerocopy_send_client": false, 00:15:38.564 "zerocopy_threshold": 0, 00:15:38.564 "tls_version": 0, 00:15:38.564 "enable_ktls": false 00:15:38.564 } 00:15:38.564 }, 00:15:38.564 { 00:15:38.564 "method": "sock_impl_set_options", 00:15:38.564 "params": { 00:15:38.564 "impl_name": "posix", 00:15:38.564 "recv_buf_size": 2097152, 00:15:38.564 "send_buf_size": 2097152, 00:15:38.564 "enable_recv_pipe": true, 00:15:38.564 "enable_quickack": false, 00:15:38.564 "enable_placement_id": 0, 00:15:38.564 "enable_zerocopy_send_server": true, 00:15:38.564 "enable_zerocopy_send_client": false, 00:15:38.564 "zerocopy_threshold": 0, 00:15:38.564 "tls_version": 0, 00:15:38.564 "enable_ktls": false 00:15:38.564 } 00:15:38.564 } 00:15:38.564 ] 00:15:38.564 }, 00:15:38.564 { 00:15:38.564 "subsystem": "vmd", 00:15:38.564 "config": [] 00:15:38.564 }, 00:15:38.564 { 00:15:38.564 "subsystem": "accel", 00:15:38.564 "config": [ 00:15:38.564 { 00:15:38.564 "method": "accel_set_options", 00:15:38.564 "params": { 00:15:38.564 "small_cache_size": 128, 00:15:38.564 "large_cache_size": 16, 00:15:38.564 "task_count": 2048, 00:15:38.564 "sequence_count": 2048, 00:15:38.564 "buf_count": 2048 00:15:38.564 } 00:15:38.564 } 00:15:38.564 ] 00:15:38.564 }, 00:15:38.565 { 00:15:38.565 "subsystem": "bdev", 00:15:38.565 "config": [ 00:15:38.565 { 00:15:38.565 "method": "bdev_set_options", 00:15:38.565 "params": { 00:15:38.565 "bdev_io_pool_size": 65535, 00:15:38.565 "bdev_io_cache_size": 256, 00:15:38.565 "bdev_auto_examine": true, 00:15:38.565 "iobuf_small_cache_size": 128, 00:15:38.565 "iobuf_large_cache_size": 16 00:15:38.565 } 00:15:38.565 }, 00:15:38.565 { 00:15:38.565 "method": "bdev_raid_set_options", 00:15:38.565 "params": { 00:15:38.565 "process_window_size_kb": 1024, 00:15:38.565 
"process_max_bandwidth_mb_sec": 0 00:15:38.565 } 00:15:38.565 }, 00:15:38.565 { 00:15:38.565 "method": "bdev_iscsi_set_options", 00:15:38.565 "params": { 00:15:38.565 "timeout_sec": 30 00:15:38.565 } 00:15:38.565 }, 00:15:38.565 { 00:15:38.565 "method": "bdev_nvme_set_options", 00:15:38.565 "params": { 00:15:38.565 "action_on_timeout": "none", 00:15:38.565 "timeout_us": 0, 00:15:38.565 "timeout_admin_us": 0, 00:15:38.565 "keep_alive_timeout_ms": 10000, 00:15:38.565 "arbitration_burst": 0, 00:15:38.565 "low_priority_weight": 0, 00:15:38.565 "medium_priority_weight": 0, 00:15:38.565 "high_priority_weight": 0, 00:15:38.565 "nvme_adminq_poll_period_us": 10000, 00:15:38.565 "nvme_ioq_poll_period_us": 0, 00:15:38.565 "io_queue_requests": 0, 00:15:38.565 "delay_cmd_submit": true, 00:15:38.565 "transport_retry_count": 4, 00:15:38.565 "bdev_retry_count": 3, 00:15:38.565 "transport_ack_timeout": 0, 00:15:38.565 "ctrlr_loss_timeout_sec": 0, 00:15:38.565 "reconnect_delay_sec": 0, 00:15:38.565 "fast_io_fail_timeout_sec": 0, 00:15:38.565 "disable_auto_failback": false, 00:15:38.565 "generate_uuids": false, 00:15:38.565 "transport_tos": 0, 00:15:38.565 "nvme_error_stat": false, 00:15:38.565 "rdma_srq_size": 0, 00:15:38.565 "io_path_stat": false, 00:15:38.565 "allow_accel_sequence": false, 00:15:38.565 "rdma_max_cq_size": 0, 00:15:38.565 "rdma_cm_event_timeout_ms": 0, 00:15:38.565 "dhchap_digests": [ 00:15:38.565 "sha256", 00:15:38.565 "sha384", 00:15:38.565 "sha512" 00:15:38.565 ], 00:15:38.565 "dhchap_dhgroups": [ 00:15:38.565 "null", 00:15:38.565 "ffdhe2048", 00:15:38.565 "ffdhe3072", 00:15:38.565 "ffdhe4096", 00:15:38.565 "ffdhe6144", 00:15:38.565 "ffdhe8192" 00:15:38.565 ] 00:15:38.565 } 00:15:38.565 }, 00:15:38.565 { 00:15:38.565 "method": "bdev_nvme_set_hotplug", 00:15:38.565 "params": { 00:15:38.565 "period_us": 100000, 00:15:38.565 "enable": false 00:15:38.565 } 00:15:38.565 }, 00:15:38.565 { 00:15:38.565 "method": "bdev_malloc_create", 00:15:38.565 "params": { 00:15:38.565 "name": "malloc0", 00:15:38.565 "num_blocks": 8192, 00:15:38.565 "block_size": 4096, 00:15:38.565 "physical_block_size": 4096, 00:15:38.565 "uuid": "f7107b31-768c-48ad-b9bb-9ae93e3a57d5", 00:15:38.565 "optimal_io_boundary": 0, 00:15:38.565 "md_size": 0, 00:15:38.565 "dif_type": 0, 00:15:38.565 "dif_is_head_of_md": false, 00:15:38.565 "dif_pi_format": 0 00:15:38.565 } 00:15:38.565 }, 00:15:38.565 { 00:15:38.565 "method": "bdev_wait_for_examine" 00:15:38.565 } 00:15:38.565 ] 00:15:38.565 }, 00:15:38.565 { 00:15:38.565 "subsystem": "scsi", 00:15:38.565 "config": null 00:15:38.565 }, 00:15:38.565 { 00:15:38.565 "subsystem": "scheduler", 00:15:38.565 "config": [ 00:15:38.565 { 00:15:38.565 "method": "framework_set_scheduler", 00:15:38.565 "params": { 00:15:38.565 "name": "static" 00:15:38.565 } 00:15:38.565 } 00:15:38.565 ] 00:15:38.565 }, 00:15:38.565 { 00:15:38.565 "subsystem": "vhost_scsi", 00:15:38.565 "config": [] 00:15:38.565 }, 00:15:38.565 { 00:15:38.565 "subsystem": "vhost_blk", 00:15:38.565 "config": [] 00:15:38.565 }, 00:15:38.565 { 00:15:38.565 "subsystem": "ublk", 00:15:38.565 "config": [ 00:15:38.565 { 00:15:38.565 "method": "ublk_create_target", 00:15:38.565 "params": { 00:15:38.565 "cpumask": "1" 00:15:38.565 } 00:15:38.565 }, 00:15:38.565 { 00:15:38.565 "method": "ublk_start_disk", 00:15:38.565 "params": { 00:15:38.565 "bdev_name": "malloc0", 00:15:38.565 "ublk_id": 0, 00:15:38.565 "num_queues": 1, 00:15:38.565 "queue_depth": 128 00:15:38.565 } 00:15:38.565 } 00:15:38.565 ] 00:15:38.565 }, 00:15:38.565 { 
00:15:38.565 "subsystem": "nbd", 00:15:38.565 "config": [] 00:15:38.565 }, 00:15:38.565 { 00:15:38.565 "subsystem": "nvmf", 00:15:38.565 "config": [ 00:15:38.565 { 00:15:38.565 "method": "nvmf_set_config", 00:15:38.565 "params": { 00:15:38.565 "discovery_filter": "match_any", 00:15:38.565 "admin_cmd_passthru": { 00:15:38.565 "identify_ctrlr": false 00:15:38.565 }, 00:15:38.565 "dhchap_digests": [ 00:15:38.565 "sha256", 00:15:38.565 "sha384", 00:15:38.565 "sha512" 00:15:38.565 ], 00:15:38.565 "dhchap_dhgroups": [ 00:15:38.565 "null", 00:15:38.565 "ffdhe2048", 00:15:38.565 "ffdhe3072", 00:15:38.565 "ffdhe4096", 00:15:38.565 "ffdhe6144", 00:15:38.565 "ffdhe8192" 00:15:38.565 ] 00:15:38.565 } 00:15:38.565 }, 00:15:38.565 { 00:15:38.565 "method": "nvmf_set_max_subsystems", 00:15:38.565 "params": { 00:15:38.565 "max_subsystems": 1024 00:15:38.565 } 00:15:38.565 }, 00:15:38.565 { 00:15:38.565 "method": "nvmf_set_crdt", 00:15:38.565 "params": { 00:15:38.565 "crdt1": 0, 00:15:38.565 "crdt2": 0, 00:15:38.565 "crdt3": 0 00:15:38.565 } 00:15:38.565 } 00:15:38.565 ] 00:15:38.565 }, 00:15:38.565 { 00:15:38.565 "subsystem": "iscsi", 00:15:38.565 "config": [ 00:15:38.565 { 00:15:38.565 "method": "iscsi_set_options", 00:15:38.565 "params": { 00:15:38.565 "node_base": "iqn.2016-06.io.spdk", 00:15:38.565 "max_sessions": 128, 00:15:38.565 "max_connections_per_session": 2, 00:15:38.565 "max_queue_depth": 64, 00:15:38.565 "default_time2wait": 2, 00:15:38.565 "default_time2retain": 20, 00:15:38.565 "first_burst_length": 8192, 00:15:38.565 "immediate_data": true, 00:15:38.565 "allow_duplicated_isid": false, 00:15:38.565 "error_recovery_level": 0, 00:15:38.565 "nop_timeout": 60, 00:15:38.565 "nop_in_interval": 30, 00:15:38.565 "disable_chap": false, 00:15:38.565 "require_chap": false, 00:15:38.565 "mutual_chap": false, 00:15:38.565 "chap_group": 0, 00:15:38.565 "max_large_datain_per_connection": 64, 00:15:38.565 "max_r2t_per_connection": 4, 00:15:38.565 "pdu_pool_size": 36864, 00:15:38.565 "immediate_data_pool_size": 16384, 00:15:38.565 "data_out_pool_size": 2048 00:15:38.565 } 00:15:38.565 } 00:15:38.565 ] 00:15:38.565 } 00:15:38.565 ] 00:15:38.565 }' 00:15:38.565 05:05:07 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 84536 00:15:38.565 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 84536 ']' 00:15:38.565 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 84536 00:15:38.565 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:15:38.565 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.565 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84536 00:15:38.565 killing process with pid 84536 00:15:38.565 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.565 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.565 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84536' 00:15:38.565 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 84536 00:15:38.565 05:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 84536 00:15:38.827 [2024-11-28 05:05:07.915711] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:38.827 [2024-11-28 05:05:07.948325] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:38.827 [2024-11-28 05:05:07.948476] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:38.827 [2024-11-28 05:05:07.955221] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:38.827 [2024-11-28 05:05:07.955293] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:38.827 [2024-11-28 05:05:07.955302] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:38.827 [2024-11-28 05:05:07.955336] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:38.827 [2024-11-28 05:05:07.955492] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:39.399 05:05:08 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=84574 00:15:39.399 05:05:08 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 84574 00:15:39.399 05:05:08 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 84574 ']' 00:15:39.399 05:05:08 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.399 05:05:08 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:39.399 05:05:08 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.399 05:05:08 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.399 05:05:08 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.399 05:05:08 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:39.399 "subsystems": [ 00:15:39.399 { 00:15:39.399 "subsystem": "fsdev", 00:15:39.399 "config": [ 00:15:39.399 { 00:15:39.399 "method": "fsdev_set_opts", 00:15:39.399 "params": { 00:15:39.399 "fsdev_io_pool_size": 65535, 00:15:39.399 "fsdev_io_cache_size": 256 00:15:39.399 } 00:15:39.399 } 00:15:39.399 ] 00:15:39.399 }, 00:15:39.399 { 00:15:39.399 "subsystem": "keyring", 00:15:39.399 "config": [] 00:15:39.399 }, 00:15:39.399 { 00:15:39.399 "subsystem": "iobuf", 00:15:39.399 "config": [ 00:15:39.399 { 00:15:39.399 "method": "iobuf_set_options", 00:15:39.399 "params": { 00:15:39.399 "small_pool_count": 8192, 00:15:39.399 "large_pool_count": 1024, 00:15:39.399 "small_bufsize": 8192, 00:15:39.399 "large_bufsize": 135168, 00:15:39.399 "enable_numa": false 00:15:39.399 } 00:15:39.399 } 00:15:39.399 ] 00:15:39.399 }, 00:15:39.399 { 00:15:39.399 "subsystem": "sock", 00:15:39.399 "config": [ 00:15:39.399 { 00:15:39.399 "method": "sock_set_default_impl", 00:15:39.399 "params": { 00:15:39.400 "impl_name": "posix" 00:15:39.400 } 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "method": "sock_impl_set_options", 00:15:39.400 "params": { 00:15:39.400 "impl_name": "ssl", 00:15:39.400 "recv_buf_size": 4096, 00:15:39.400 "send_buf_size": 4096, 00:15:39.400 "enable_recv_pipe": true, 00:15:39.400 "enable_quickack": false, 00:15:39.400 "enable_placement_id": 0, 00:15:39.400 "enable_zerocopy_send_server": true, 00:15:39.400 "enable_zerocopy_send_client": false, 00:15:39.400 "zerocopy_threshold": 0, 00:15:39.400 "tls_version": 0, 00:15:39.400 "enable_ktls": false 00:15:39.400 } 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "method": "sock_impl_set_options", 00:15:39.400 "params": { 00:15:39.400 "impl_name": 
"posix", 00:15:39.400 "recv_buf_size": 2097152, 00:15:39.400 "send_buf_size": 2097152, 00:15:39.400 "enable_recv_pipe": true, 00:15:39.400 "enable_quickack": false, 00:15:39.400 "enable_placement_id": 0, 00:15:39.400 "enable_zerocopy_send_server": true, 00:15:39.400 "enable_zerocopy_send_client": false, 00:15:39.400 "zerocopy_threshold": 0, 00:15:39.400 "tls_version": 0, 00:15:39.400 "enable_ktls": false 00:15:39.400 } 00:15:39.400 } 00:15:39.400 ] 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "subsystem": "vmd", 00:15:39.400 "config": [] 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "subsystem": "accel", 00:15:39.400 "config": [ 00:15:39.400 { 00:15:39.400 "method": "accel_set_options", 00:15:39.400 "params": { 00:15:39.400 "small_cache_size": 128, 00:15:39.400 "large_cache_size": 16, 00:15:39.400 "task_count": 2048, 00:15:39.400 "sequence_count": 2048, 00:15:39.400 "buf_count": 2048 00:15:39.400 } 00:15:39.400 } 00:15:39.400 ] 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "subsystem": "bdev", 00:15:39.400 "config": [ 00:15:39.400 { 00:15:39.400 "method": "bdev_set_options", 00:15:39.400 "params": { 00:15:39.400 "bdev_io_pool_size": 65535, 00:15:39.400 "bdev_io_cache_size": 256, 00:15:39.400 "bdev_auto_examine": true, 00:15:39.400 "iobuf_small_cache_size": 128, 00:15:39.400 "iobuf_large_cache_size": 16 00:15:39.400 } 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "method": "bdev_raid_set_options", 00:15:39.400 "params": { 00:15:39.400 "process_window_size_kb": 1024, 00:15:39.400 "process_max_bandwidth_mb_sec": 0 00:15:39.400 } 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "method": "bdev_iscsi_set_options", 00:15:39.400 "params": { 00:15:39.400 "timeout_sec": 30 00:15:39.400 } 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "method": "bdev_nvme_set_options", 00:15:39.400 "params": { 00:15:39.400 "action_on_timeout": "none", 00:15:39.400 "timeout_us": 0, 00:15:39.400 "timeout_admin_us": 0, 00:15:39.400 "keep_alive_timeout_ms": 10000, 00:15:39.400 "arbitration_burst": 0, 00:15:39.400 "low_priority_weight": 0, 00:15:39.400 "medium_priority_weight": 0, 00:15:39.400 "high_priority_weight": 0, 00:15:39.400 "nvme_adminq_poll_period_us": 10000, 00:15:39.400 "nvme_ioq_poll_period_us": 0, 00:15:39.400 "io_queue_requests": 0, 00:15:39.400 "delay_cmd_submit": true, 00:15:39.400 "transport_retry_count": 4, 00:15:39.400 "bdev_retry_count": 3, 00:15:39.400 "transport_ack_timeout": 0, 00:15:39.400 "ctrlr_loss_timeout_sec": 0, 00:15:39.400 "reconnect_delay_sec": 0, 00:15:39.400 "fast_io_fail_timeout_sec": 0, 00:15:39.400 "disable_auto_failback": false, 00:15:39.400 "generate_uuids": false, 00:15:39.400 "transport_tos": 0, 00:15:39.400 "nvme_error_stat": false, 00:15:39.400 "rdma_srq_size": 0, 00:15:39.400 "io_path_stat": false, 00:15:39.400 "allow_accel_sequence": false, 00:15:39.400 "rdma_max_cq_size": 0, 00:15:39.400 "rdma_cm_event_timeout_ms": 0, 00:15:39.400 "dhchap_digests": [ 00:15:39.400 "sha256", 00:15:39.400 "sha384", 00:15:39.400 "sha512" 00:15:39.400 ], 00:15:39.400 "dhchap_dhgroups": [ 00:15:39.400 "null", 00:15:39.400 "ffdhe2048", 00:15:39.400 "ffdhe3072", 00:15:39.400 "ffdhe4096", 00:15:39.400 "ffdhe6144", 00:15:39.400 "ffdhe8192" 00:15:39.400 ] 00:15:39.400 } 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "method": "bdev_nvme_set_hotplug", 00:15:39.400 "params": { 00:15:39.400 "period_us": 100000, 00:15:39.400 "enable": false 00:15:39.400 } 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "method": "bdev_malloc_create", 00:15:39.400 "params": { 00:15:39.400 "name": "malloc0", 00:15:39.400 "num_blocks": 8192, 
00:15:39.400 "block_size": 4096, 00:15:39.400 "physical_block_size": 4096, 00:15:39.400 "uuid": "f7107b31-768c-48ad-b9bb-9ae93e3a57d5", 00:15:39.400 "optimal_io_boundary": 0, 00:15:39.400 "md_size": 0, 00:15:39.400 "dif_type": 0, 00:15:39.400 "dif_is_head_of_md": false, 00:15:39.400 "dif_pi_format": 0 00:15:39.400 } 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "method": "bdev_wait_for_examine" 00:15:39.400 } 00:15:39.400 ] 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "subsystem": "scsi", 00:15:39.400 "config": null 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "subsystem": "scheduler", 00:15:39.400 "config": [ 00:15:39.400 { 00:15:39.400 "method": "framework_set_scheduler", 00:15:39.400 "params": { 00:15:39.400 "name": "static" 00:15:39.400 } 00:15:39.400 } 00:15:39.400 ] 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "subsystem": "vhost_scsi", 00:15:39.400 "config": [] 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "subsystem": "vhost_blk", 00:15:39.400 "config": [] 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "subsystem": "ublk", 00:15:39.400 "config": [ 00:15:39.400 { 00:15:39.400 "method": "ublk_create_target", 00:15:39.400 "params": { 00:15:39.400 "cpumask": "1" 00:15:39.400 } 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "method": "ublk_start_disk", 00:15:39.400 "params": { 00:15:39.400 "bdev_name": "malloc0", 00:15:39.400 "ublk_id": 0, 00:15:39.400 "num_queues": 1, 00:15:39.400 "queue_depth": 128 00:15:39.400 } 00:15:39.400 } 00:15:39.400 ] 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "subsystem": "nbd", 00:15:39.400 "config": [] 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "subsystem": "nvmf", 00:15:39.400 "config": [ 00:15:39.400 { 00:15:39.400 "method": "nvmf_set_config", 00:15:39.400 "params": { 00:15:39.400 "discovery_filter": "match_any", 00:15:39.400 "admin_cmd_passthru": { 00:15:39.400 "identify_ctrlr": false 00:15:39.400 }, 00:15:39.400 "dhchap_digests": [ 00:15:39.400 "sha256", 00:15:39.400 "sha384", 00:15:39.400 "sha512" 00:15:39.400 ], 00:15:39.400 "dhchap_dhgroups": [ 00:15:39.400 "null", 00:15:39.400 "ffdhe2048", 00:15:39.400 "ffdhe3072", 00:15:39.400 "ffdhe4096", 00:15:39.400 "ffdhe6144", 00:15:39.400 "ffdhe8192" 00:15:39.400 ] 00:15:39.400 } 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "method": "nvmf_set_max_subsystems", 00:15:39.400 "params": { 00:15:39.400 "max_subsystems": 1024 00:15:39.400 } 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "method": "nvmf_set_crdt", 00:15:39.400 "params": { 00:15:39.400 "crdt1": 0, 00:15:39.400 "crdt2": 0, 00:15:39.400 "crdt3": 0 00:15:39.400 } 00:15:39.400 } 00:15:39.400 ] 00:15:39.400 }, 00:15:39.400 { 00:15:39.400 "subsystem": "iscsi", 00:15:39.400 "config": [ 00:15:39.400 { 00:15:39.400 "method": "iscsi_set_options", 00:15:39.400 "params": { 00:15:39.400 "node_base": "iqn.2016-06.io.spdk", 00:15:39.400 "max_sessions": 128, 00:15:39.400 "max_connections_per_session": 2, 00:15:39.400 "max_queue_depth": 64, 00:15:39.400 "default_time2wait": 2, 00:15:39.400 "default_time2retain": 20, 00:15:39.400 "first_burst_length": 8192, 00:15:39.400 "immediate_data": true, 00:15:39.400 "allow_duplicated_isid": false, 00:15:39.400 "error_recovery_level": 0, 00:15:39.400 "nop_timeout": 60, 00:15:39.400 "nop_in_interval": 30, 00:15:39.400 "disable_chap": false, 00:15:39.400 "require_chap": false, 00:15:39.400 "mutual_chap": false, 00:15:39.400 "chap_group": 0, 00:15:39.400 "max_large_datain_per_connection": 64, 00:15:39.400 "max_r2t_per_connection": 4, 00:15:39.400 "pdu_pool_size": 36864, 00:15:39.400 "immediate_data_pool_size": 16384, 00:15:39.400 
"data_out_pool_size": 2048 00:15:39.400 } 00:15:39.400 } 00:15:39.400 ] 00:15:39.400 } 00:15:39.400 ] 00:15:39.400 }' 00:15:39.400 05:05:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:39.400 [2024-11-28 05:05:08.504019] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:39.400 [2024-11-28 05:05:08.504457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84574 ] 00:15:39.400 [2024-11-28 05:05:08.653225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.662 [2024-11-28 05:05:08.682014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.922 [2024-11-28 05:05:09.049203] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:39.922 [2024-11-28 05:05:09.049561] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:39.922 [2024-11-28 05:05:09.057369] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:39.922 [2024-11-28 05:05:09.057457] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:39.922 [2024-11-28 05:05:09.057466] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:39.922 [2024-11-28 05:05:09.057477] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:39.922 [2024-11-28 05:05:09.066303] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:39.922 [2024-11-28 05:05:09.066336] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:39.922 [2024-11-28 05:05:09.073234] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:39.922 [2024-11-28 05:05:09.073358] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:39.922 [2024-11-28 05:05:09.090208] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 84574 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 84574 ']' 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 84574 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84574 00:15:40.183 killing process with pid 84574 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84574' 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 84574 00:15:40.183 05:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 84574 00:15:40.756 [2024-11-28 05:05:09.729414] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:40.756 [2024-11-28 05:05:09.769222] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:40.756 [2024-11-28 05:05:09.769362] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:40.756 [2024-11-28 05:05:09.777215] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:40.756 [2024-11-28 05:05:09.777275] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:40.756 [2024-11-28 05:05:09.777291] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:40.756 [2024-11-28 05:05:09.777322] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:40.756 [2024-11-28 05:05:09.777472] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:41.018 05:05:10 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:41.018 ************************************ 00:15:41.018 END TEST test_save_ublk_config 00:15:41.018 ************************************ 00:15:41.018 00:15:41.018 real 0m3.929s 00:15:41.018 user 0m2.684s 00:15:41.018 sys 0m1.927s 00:15:41.018 05:05:10 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.018 05:05:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:41.018 05:05:10 ublk -- ublk/ublk.sh@139 -- # spdk_pid=84626 00:15:41.018 05:05:10 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:41.018 05:05:10 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:41.018 05:05:10 ublk -- ublk/ublk.sh@141 -- # waitforlisten 84626 00:15:41.018 05:05:10 ublk -- common/autotest_common.sh@835 -- # '[' -z 84626 ']' 00:15:41.018 05:05:10 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.018 05:05:10 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.018 05:05:10 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.018 05:05:10 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.018 05:05:10 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:41.280 [2024-11-28 05:05:10.361014] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
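The save/restore round trip exercised above can be reproduced by hand against any running spdk_tgt: dump the live configuration with the save_config RPC, start a fresh target from that JSON, and confirm the ublk device reappears. A minimal sketch, assuming the in-tree rpc.py client and build paths (the /tmp path is illustrative; the test itself avoids a temp file by piping the JSON through /dev/fd/63):

  ./scripts/rpc.py save_config > /tmp/ublk_config.json        # dump live JSON-RPC state
  ./build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json &     # replay it at startup
  ./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'  # expect /dev/ublkb0

Note that the saved config carries the full chain needed to rebuild the device in dependency order: ublk_create_target, bdev_malloc_create with the original UUID, then ublk_start_disk.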
00:15:41.280 [2024-11-28 05:05:10.361150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84626 ] 00:15:41.280 [2024-11-28 05:05:10.506168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:41.280 [2024-11-28 05:05:10.536420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.280 [2024-11-28 05:05:10.536474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.224 05:05:11 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.224 05:05:11 ublk -- common/autotest_common.sh@868 -- # return 0 00:15:42.224 05:05:11 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:42.224 05:05:11 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:42.224 05:05:11 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.224 05:05:11 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:42.224 ************************************ 00:15:42.224 START TEST test_create_ublk 00:15:42.224 ************************************ 00:15:42.224 05:05:11 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:15:42.224 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:42.224 05:05:11 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.224 05:05:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:42.224 [2024-11-28 05:05:11.232206] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:42.224 [2024-11-28 05:05:11.234009] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:42.224 05:05:11 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.224 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:42.224 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:42.224 05:05:11 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.224 05:05:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:42.224 05:05:11 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.224 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:42.224 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:42.224 05:05:11 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.224 05:05:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:42.224 [2024-11-28 05:05:11.317804] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:42.224 [2024-11-28 05:05:11.318279] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:42.224 [2024-11-28 05:05:11.318308] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:42.224 [2024-11-28 05:05:11.318319] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:42.224 [2024-11-28 05:05:11.325245] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:42.224 [2024-11-28 05:05:11.325286] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:42.224 
[2024-11-28 05:05:11.333227] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:42.224 [2024-11-28 05:05:11.333969] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:42.224 [2024-11-28 05:05:11.356211] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:42.224 05:05:11 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.224 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:42.224 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:42.224 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:42.224 05:05:11 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.224 05:05:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:42.224 05:05:11 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.224 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:42.224 { 00:15:42.224 "ublk_device": "/dev/ublkb0", 00:15:42.224 "id": 0, 00:15:42.224 "queue_depth": 512, 00:15:42.224 "num_queues": 4, 00:15:42.224 "bdev_name": "Malloc0" 00:15:42.224 } 00:15:42.224 ]' 00:15:42.224 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:42.224 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:42.224 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:42.224 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:42.224 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:42.224 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:42.224 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:42.484 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:42.484 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:42.484 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:42.484 05:05:11 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:42.484 05:05:11 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:42.484 05:05:11 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:42.484 05:05:11 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:42.484 05:05:11 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:42.484 05:05:11 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:42.484 05:05:11 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:42.484 05:05:11 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:42.484 05:05:11 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:42.484 05:05:11 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:42.484 05:05:11 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
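The fio template assembled above drives a 10-second, time-based, O_DIRECT write of pattern 0xcc across the first 128 MiB (134217728 bytes) of /dev/ublkb0. Because --time_based with --runtime=10 lets the write phase consume the entire run, the --do_verify=1 read-back never executes, as fio itself reports below. A standalone sketch of the same check against an arbitrary block device (device path illustrative); dropping --time_based lets the verify read phase actually run once the writes finish:

  fio --name=fio_test --filename=/dev/ublkb0 --size=128M --rw=write --direct=1 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0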
00:15:42.484 05:05:11 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:42.484 fio: verification read phase will never start because write phase uses all of runtime 00:15:42.484 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:42.484 fio-3.35 00:15:42.484 Starting 1 process 00:15:54.692 00:15:54.692 fio_test: (groupid=0, jobs=1): err= 0: pid=84670: Thu Nov 28 05:05:21 2024 00:15:54.692 write: IOPS=13.5k, BW=52.8MiB/s (55.4MB/s)(528MiB/10001msec); 0 zone resets 00:15:54.692 clat (usec): min=35, max=8936, avg=73.22, stdev=172.21 00:15:54.693 lat (usec): min=35, max=8957, avg=73.63, stdev=172.24 00:15:54.693 clat percentiles (usec): 00:15:54.693 | 1.00th=[ 50], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 60], 00:15:54.693 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 65], 00:15:54.693 | 70.00th=[ 67], 80.00th=[ 69], 90.00th=[ 72], 95.00th=[ 77], 00:15:54.693 | 99.00th=[ 91], 99.50th=[ 188], 99.90th=[ 3556], 99.95th=[ 3982], 00:15:54.693 | 99.99th=[ 4228] 00:15:54.693 bw ( KiB/s): min=19584, max=60344, per=99.53%, avg=53842.53, stdev=12875.57, samples=19 00:15:54.693 iops : min= 4896, max=15086, avg=13460.63, stdev=3218.89, samples=19 00:15:54.693 lat (usec) : 50=0.98%, 100=98.31%, 250=0.36%, 500=0.05%, 750=0.01% 00:15:54.693 lat (usec) : 1000=0.01% 00:15:54.693 lat (msec) : 2=0.04%, 4=0.20%, 10=0.04% 00:15:54.693 cpu : usr=2.14%, sys=10.04%, ctx=135260, majf=0, minf=797 00:15:54.693 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:54.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.693 issued rwts: total=0,135248,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.693 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:54.693 00:15:54.693 Run status group 0 (all jobs): 00:15:54.693 WRITE: bw=52.8MiB/s (55.4MB/s), 52.8MiB/s-52.8MiB/s (55.4MB/s-55.4MB/s), io=528MiB (554MB), run=10001-10001msec 00:15:54.693 00:15:54.693 Disk stats (read/write): 00:15:54.693 ublkb0: ios=0/133679, merge=0/0, ticks=0/8722, in_queue=8723, util=99.08% 00:15:54.693 05:05:21 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.693 [2024-11-28 05:05:21.798305] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:54.693 [2024-11-28 05:05:21.834227] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:54.693 [2024-11-28 05:05:21.834907] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:54.693 [2024-11-28 05:05:21.849194] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:54.693 [2024-11-28 05:05:21.849450] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:54.693 [2024-11-28 05:05:21.849458] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.693 05:05:21 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.693 [2024-11-28 05:05:21.860280] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:54.693 request: 00:15:54.693 { 00:15:54.693 "ublk_id": 0, 00:15:54.693 "method": "ublk_stop_disk", 00:15:54.693 "req_id": 1 00:15:54.693 } 00:15:54.693 Got JSON-RPC error response 00:15:54.693 response: 00:15:54.693 { 00:15:54.693 "code": -19, 00:15:54.693 "message": "No such device" 00:15:54.693 } 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:54.693 05:05:21 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.693 [2024-11-28 05:05:21.871271] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:54.693 [2024-11-28 05:05:21.876572] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:54.693 [2024-11-28 05:05:21.876600] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.693 05:05:21 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.693 05:05:21 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:54.693 05:05:21 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.693 05:05:21 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:54.693 05:05:21 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:54.693 05:05:21 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:54.693 05:05:21 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.693 05:05:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.693 05:05:21 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:54.693 05:05:21 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:54.693 05:05:22 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:54.693 00:15:54.693 real 0m10.797s 00:15:54.693 user 0m0.525s 00:15:54.693 sys 0m1.090s 00:15:54.693 ************************************ 00:15:54.693 END TEST test_create_ublk 00:15:54.693 ************************************ 00:15:54.693 05:05:22 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.693 05:05:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.693 05:05:22 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:54.693 05:05:22 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:54.693 05:05:22 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.693 05:05:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.693 ************************************ 00:15:54.693 START TEST test_create_multi_ublk 00:15:54.693 ************************************ 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.693 [2024-11-28 05:05:22.071196] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:54.693 [2024-11-28 05:05:22.072076] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.693 [2024-11-28 05:05:22.144316] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:15:54.693 [2024-11-28 05:05:22.144612] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:54.693 [2024-11-28 05:05:22.144620] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:54.693 [2024-11-28 05:05:22.144625] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:54.693 [2024-11-28 05:05:22.168207] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:54.693 [2024-11-28 05:05:22.168225] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:54.693 [2024-11-28 05:05:22.180203] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:54.693 [2024-11-28 05:05:22.180680] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:54.693 [2024-11-28 05:05:22.220214] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.693 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.694 [2024-11-28 05:05:22.304289] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:54.694 [2024-11-28 05:05:22.304581] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:54.694 [2024-11-28 05:05:22.304588] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:54.694 [2024-11-28 05:05:22.304594] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:54.694 [2024-11-28 05:05:22.316208] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:54.694 [2024-11-28 05:05:22.316227] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:54.694 [2024-11-28 05:05:22.328199] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:54.694 [2024-11-28 05:05:22.328689] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:54.694 [2024-11-28 05:05:22.364204] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:54.694 
05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.694 [2024-11-28 05:05:22.448289] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:54.694 [2024-11-28 05:05:22.448580] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:54.694 [2024-11-28 05:05:22.448589] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:54.694 [2024-11-28 05:05:22.448593] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:54.694 [2024-11-28 05:05:22.460210] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:54.694 [2024-11-28 05:05:22.460226] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:54.694 [2024-11-28 05:05:22.472203] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:54.694 [2024-11-28 05:05:22.472682] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:54.694 [2024-11-28 05:05:22.508212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.694 [2024-11-28 05:05:22.592286] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:54.694 [2024-11-28 05:05:22.592584] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:54.694 [2024-11-28 05:05:22.592591] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:54.694 [2024-11-28 05:05:22.592597] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:54.694 
[2024-11-28 05:05:22.604210] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:54.694 [2024-11-28 05:05:22.604231] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:54.694 [2024-11-28 05:05:22.616202] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:54.694 [2024-11-28 05:05:22.616685] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:54.694 [2024-11-28 05:05:22.636203] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:54.694 { 00:15:54.694 "ublk_device": "/dev/ublkb0", 00:15:54.694 "id": 0, 00:15:54.694 "queue_depth": 512, 00:15:54.694 "num_queues": 4, 00:15:54.694 "bdev_name": "Malloc0" 00:15:54.694 }, 00:15:54.694 { 00:15:54.694 "ublk_device": "/dev/ublkb1", 00:15:54.694 "id": 1, 00:15:54.694 "queue_depth": 512, 00:15:54.694 "num_queues": 4, 00:15:54.694 "bdev_name": "Malloc1" 00:15:54.694 }, 00:15:54.694 { 00:15:54.694 "ublk_device": "/dev/ublkb2", 00:15:54.694 "id": 2, 00:15:54.694 "queue_depth": 512, 00:15:54.694 "num_queues": 4, 00:15:54.694 "bdev_name": "Malloc2" 00:15:54.694 }, 00:15:54.694 { 00:15:54.694 "ublk_device": "/dev/ublkb3", 00:15:54.694 "id": 3, 00:15:54.694 "queue_depth": 512, 00:15:54.694 "num_queues": 4, 00:15:54.694 "bdev_name": "Malloc3" 00:15:54.694 } 00:15:54.694 ]' 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]]
00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id'
00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]]
00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth'
00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues'
00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name'
00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]]
00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:15:54.694 05:05:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device'
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]]
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id'
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]]
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth'
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues'
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name'
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]]
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device'
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]]
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id'
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]]
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth'
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues'
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:15:54.694 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name'
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]]
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]]
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:15:54.695 [2024-11-28 05:05:23.282261] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:15:54.695 [2024-11-28 05:05:23.322207] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:15:54.695 [2024-11-28 05:05:23.323111] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:15:54.695 [2024-11-28 05:05:23.330226] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:15:54.695 [2024-11-28 05:05:23.330479] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:15:54.695 [2024-11-28 05:05:23.330490] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:15:54.695 [2024-11-28 05:05:23.346267] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:15:54.695 [2024-11-28 05:05:23.379721] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:15:54.695 [2024-11-28 05:05:23.380834] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:15:54.695 [2024-11-28 05:05:23.386204] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:15:54.695 [2024-11-28 05:05:23.386439] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:15:54.695 [2024-11-28 05:05:23.386450] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:15:54.695 [2024-11-28 05:05:23.402268] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV
00:15:54.695 [2024-11-28 05:05:23.449226] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed
00:15:54.695 [2024-11-28 05:05:23.450000] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV
00:15:54.695 [2024-11-28 05:05:23.458202] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed
00:15:54.695 [2024-11-28 05:05:23.458446] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq
00:15:54.695 [2024-11-28 05:05:23.458458] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:15:54.695 [2024-11-28 05:05:23.466275] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV
00:15:54.695 [2024-11-28 05:05:23.506229] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed
00:15:54.695 [2024-11-28 05:05:23.506873] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV
00:15:54.695 [2024-11-28 05:05:23.514207] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed
00:15:54.695 [2024-11-28 05:05:23.514444] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq
00:15:54.695 [2024-11-28 05:05:23.514454] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target
00:15:54.695 [2024-11-28 05:05:23.714252] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:15:54.695 [2024-11-28 05:05:23.715466] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:15:54.695 [2024-11-28 05:05:23.715500] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.695 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:15:54.954 05:05:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.954 05:05:23 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:15:54.954 05:05:23 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length
00:15:54.954 05:05:24 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:15:54.954 05:05:24 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:15:54.954 05:05:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.954 05:05:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:15:54.954 05:05:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.954 05:05:24 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:15:54.954 05:05:24 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length
00:15:54.954 ************************************
00:15:54.954 END TEST test_create_multi_ublk
00:15:54.954 ************************************
00:15:54.954 05:05:24 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:15:54.954
00:15:54.954 real 0m2.001s
00:15:54.954 user 0m0.794s
00:15:54.954 sys 0m0.127s
00:15:54.954 05:05:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:54.954 05:05:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:15:54.954 05:05:24 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:15:54.954 05:05:24 ublk -- ublk/ublk.sh@147 -- # cleanup
00:15:54.954 05:05:24 ublk -- ublk/ublk.sh@130 -- # killprocess 84626
00:15:54.954 05:05:24 ublk -- common/autotest_common.sh@954 -- # '[' -z 84626 ']'
00:15:54.954 05:05:24 ublk -- common/autotest_common.sh@958 -- # kill -0 84626
00:15:54.954 05:05:24 ublk -- common/autotest_common.sh@959 -- # uname
00:15:54.954 05:05:24 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:54.954 05:05:24 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84626
00:15:54.954 killing process with pid 84626
00:15:54.954 05:05:24 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:54.954 05:05:24 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:54.954 05:05:24 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84626'
00:15:54.954 05:05:24 ublk -- common/autotest_common.sh@973 -- # kill 84626
00:15:54.954 05:05:24 ublk -- common/autotest_common.sh@978 -- # wait 84626
00:15:55.212 [2024-11-28 05:05:24.275363] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:15:55.212 [2024-11-28 05:05:24.275426] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:15:55.470
00:15:55.470 real 0m18.399s
00:15:55.470 user 0m28.015s
00:15:55.470 sys 0m7.708s
00:15:55.470 05:05:24 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:55.470 ************************************
00:15:55.470 END TEST ublk
00:15:55.470 ************************************
00:15:55.470 05:05:24 ublk -- common/autotest_common.sh@10 -- # set +x
00:15:55.470 05:05:24 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:15:55.470 05:05:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:55.470 05:05:24 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:55.470 05:05:24 -- common/autotest_common.sh@10 -- # set +x
00:15:55.470 ************************************
00:15:55.470 START TEST ublk_recovery
00:15:55.470 ************************************
00:15:55.470 05:05:24 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:15:55.470 * Looking for test storage...
00:15:55.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:15:55.470 05:05:24 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:15:55.470 05:05:24 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:15:55.470 05:05:24 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version
00:15:55.470 05:05:24 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-:
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-:
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<'
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@345 -- # : 1
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@365 -- # decimal 1
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@353 -- # local d=1
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@355 -- # echo 1
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@366 -- # decimal 2
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@353 -- # local d=2
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@355 -- # echo 2
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2
00:15:55.470 05:05:24 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:55.471 05:05:24 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:55.471 05:05:24 ublk_recovery -- scripts/common.sh@368 -- # return 0
00:15:55.471 05:05:24 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:55.471 05:05:24 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:15:55.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:55.471 --rc genhtml_branch_coverage=1
00:15:55.471 --rc genhtml_function_coverage=1
00:15:55.471 --rc genhtml_legend=1
00:15:55.471 --rc geninfo_all_blocks=1
00:15:55.471 --rc geninfo_unexecuted_blocks=1
00:15:55.471
00:15:55.471 '
00:15:55.471 05:05:24 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:15:55.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:55.471 --rc genhtml_branch_coverage=1
00:15:55.471 --rc genhtml_function_coverage=1
00:15:55.471 --rc genhtml_legend=1
00:15:55.471 --rc geninfo_all_blocks=1
00:15:55.471 --rc geninfo_unexecuted_blocks=1
00:15:55.471
00:15:55.471 '
00:15:55.471 05:05:24 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:15:55.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:55.471 --rc genhtml_branch_coverage=1
00:15:55.471 --rc genhtml_function_coverage=1
00:15:55.471 --rc genhtml_legend=1
00:15:55.471 --rc geninfo_all_blocks=1
00:15:55.471 --rc geninfo_unexecuted_blocks=1
00:15:55.471
00:15:55.471 '
00:15:55.471 05:05:24 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:15:55.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:55.471 --rc genhtml_branch_coverage=1
00:15:55.471 --rc genhtml_function_coverage=1
00:15:55.471 --rc genhtml_legend=1
00:15:55.471 --rc geninfo_all_blocks=1
00:15:55.471 --rc geninfo_unexecuted_blocks=1
00:15:55.471
00:15:55.471 '
00:15:55.471 05:05:24 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:15:55.471 05:05:24 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:15:55.471 05:05:24 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512
00:15:55.471 05:05:24 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:15:55.471 05:05:24 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096
00:15:55.471 05:05:24 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:15:55.471 05:05:24 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:15:55.471 05:05:24 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:15:55.471 05:05:24 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:15:55.471 05:05:24 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv
00:15:55.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:55.471 05:05:24 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=84991
00:15:55.471 05:05:24 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:15:55.471 05:05:24 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 84991
00:15:55.471 05:05:24 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 84991 ']'
00:15:55.471 05:05:24 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:55.471 05:05:24 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:15:55.471 05:05:24 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:55.471 05:05:24 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:55.471 05:05:24 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:55.471 05:05:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:15:55.729 [2024-11-28 05:05:24.803569] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
00:15:55.729 [2024-11-28 05:05:24.803801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84991 ]
00:15:55.729 [2024-11-28 05:05:24.944080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:15:55.729 [2024-11-28 05:05:24.962856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:55.729 [2024-11-28 05:05:24.962891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:56.666 05:05:25 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:56.666 05:05:25 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:15:56.666 05:05:25 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target
00:15:56.666 05:05:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.666 05:05:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:15:56.666 [2024-11-28 05:05:25.594197] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:15:56.666 [2024-11-28 05:05:25.595475] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:15:56.666 05:05:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.666 05:05:25 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:15:56.666 05:05:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.666 05:05:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:15:56.666 malloc0
00:15:56.666 05:05:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.666 05:05:25 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128
00:15:56.666 05:05:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.666 05:05:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:15:56.666 [2024-11-28 05:05:25.634303] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128
00:15:56.666 [2024-11-28 05:05:25.634388] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1
00:15:56.666 [2024-11-28 05:05:25.634394] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:15:56.666 [2024-11-28 05:05:25.634402] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:15:56.666 [2024-11-28 05:05:25.642211] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:15:56.666 [2024-11-28 05:05:25.642237] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:15:56.666 [2024-11-28 05:05:25.650215] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:15:56.666 [2024-11-28 05:05:25.650347] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:15:56.666 [2024-11-28 05:05:25.665209] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:15:56.666 1
00:15:56.666 05:05:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.666 05:05:25 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1
00:15:57.606 05:05:26 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=85019
00:15:57.606 05:05:26 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5
00:15:57.606 05:05:26 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60
00:15:57.606 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:15:57.606 fio-3.35
00:15:57.606 Starting 1 process
00:16:02.875 05:05:31 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 84991
00:16:02.875 05:05:31 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5
00:16:08.164 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 84991 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk
00:16:08.164 05:05:36 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=85138
00:16:08.164 05:05:36 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:16:08.164 05:05:36 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 85138
00:16:08.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:08.164 05:05:36 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85138 ']'
00:16:08.164 05:05:36 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:08.164 05:05:36 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:08.164 05:05:36 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:08.164 05:05:36 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:08.164 05:05:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:08.164 05:05:36 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:16:08.164 [2024-11-28 05:05:36.757525] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
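The kill -9 above is the point of the whole suite: fio is still driving /dev/ublkb1 when the target dies, and a fresh spdk_tgt must re-attach to the live kernel device. Condensed into the RPCs actually echoed in this log, the flow looks roughly like the following sketch (the RPC commands and arguments are verbatim from the log; the surrounding variable handling is assumed, not the verbatim ublk_recovery.sh):

  # back a ublk disk with a 64 MiB malloc bdev and expose it as /dev/ublkb1
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
  # drive random I/O, then crash the target while fio is still running
  fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
  kill -9 "$spdk_pid"                          # simulated target crash
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &    # new target instance
  spdk_pid=$!
  scripts/rpc.py ublk_recover_disk malloc0 1   # issued below, once it is up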
00:16:08.164 [2024-11-28 05:05:36.757645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85138 ]
00:16:08.164 [2024-11-28 05:05:36.896813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:16:08.164 [2024-11-28 05:05:36.922040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:08.164 [2024-11-28 05:05:36.922131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:08.422 05:05:37 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:08.422 05:05:37 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:16:08.422 05:05:37 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target
00:16:08.422 05:05:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.422 05:05:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:08.422 [2024-11-28 05:05:37.595195] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:16:08.422 [2024-11-28 05:05:37.596439] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:16:08.422 05:05:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.422 05:05:37 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:16:08.422 05:05:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.422 05:05:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:08.422 malloc0
00:16:08.422 05:05:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.422 05:05:37 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1
00:16:08.422 05:05:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.422 05:05:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:08.422 [2024-11-28 05:05:37.635305] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0
00:16:08.422 [2024-11-28 05:05:37.635343] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:16:08.422 [2024-11-28 05:05:37.635349] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:16:08.422 [2024-11-28 05:05:37.641268] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:16:08.422 [2024-11-28 05:05:37.641304] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:16:08.422 1
00:16:08.422 05:05:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.422 05:05:37 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 85019
00:16:09.797 [2024-11-28 05:05:38.641333] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:16:09.797 [2024-11-28 05:05:38.650200] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:16:09.797 [2024-11-28 05:05:38.650221] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:16:10.732 [2024-11-28 05:05:39.650241] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:16:10.732 [2024-11-28 05:05:39.651242] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:16:10.732 [2024-11-28 05:05:39.651249] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:16:11.667 [2024-11-28 05:05:40.654210] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:16:11.667 [2024-11-28 05:05:40.661210] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:16:11.667 [2024-11-28 05:05:40.661228] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:16:11.667 [2024-11-28 05:05:40.661235] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
00:16:11.667 [2024-11-28 05:05:40.661310] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:16:33.700 [2024-11-28 05:06:01.950206] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
00:16:33.700 [2024-11-28 05:06:01.956807] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
00:16:33.700 [2024-11-28 05:06:01.964441] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
00:16:33.700 [2024-11-28 05:06:01.964458] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:17:00.245
00:17:00.245 fio_test: (groupid=0, jobs=1): err= 0: pid=85026: Thu Nov 28 05:06:26 2024
00:17:00.245 read: IOPS=14.7k, BW=57.6MiB/s (60.4MB/s)(3454MiB/60001msec)
00:17:00.245 slat (nsec): min=1162, max=495289, avg=4971.99, stdev=1426.45
00:17:00.245 clat (usec): min=1078, max=30294k, avg=4436.52, stdev=265614.57
00:17:00.245 lat (usec): min=1088, max=30294k, avg=4441.49, stdev=265614.57
00:17:00.245 clat percentiles (usec):
00:17:00.245 | 1.00th=[ 1745], 5.00th=[ 1876], 10.00th=[ 1893], 20.00th=[ 1926],
00:17:00.245 | 30.00th=[ 1942], 40.00th=[ 1958], 50.00th=[ 1975], 60.00th=[ 1991],
00:17:00.245 | 70.00th=[ 2008], 80.00th=[ 2057], 90.00th=[ 2147], 95.00th=[ 2868],
00:17:00.245 | 99.00th=[ 5014], 99.50th=[ 5604], 99.90th=[ 7308], 99.95th=[ 8848],
00:17:00.245 | 99.99th=[13042]
00:17:00.245 bw ( KiB/s): min=50576, max=124440, per=100.00%, avg=117949.42, stdev=14219.28, samples=59
00:17:00.245 iops : min=12644, max=31110, avg=29487.36, stdev=3554.82, samples=59
00:17:00.245 write: IOPS=14.7k, BW=57.5MiB/s (60.3MB/s)(3449MiB/60001msec); 0 zone resets
00:17:00.245 slat (nsec): min=1195, max=1051.8k, avg=5035.04, stdev=1804.68
00:17:00.245 clat (usec): min=1198, max=30294k, avg=4244.13, stdev=249698.24
00:17:00.245 lat (usec): min=1208, max=30294k, avg=4249.17, stdev=249698.25
00:17:00.245 clat percentiles (usec):
00:17:00.245 | 1.00th=[ 1795], 5.00th=[ 1958], 10.00th=[ 1991], 20.00th=[ 2024],
00:17:00.245 | 30.00th=[ 2040], 40.00th=[ 2057], 50.00th=[ 2073], 60.00th=[ 2089],
00:17:00.245 | 70.00th=[ 2114], 80.00th=[ 2147], 90.00th=[ 2245], 95.00th=[ 2802],
00:17:00.245 | 99.00th=[ 5014], 99.50th=[ 5669], 99.90th=[ 7373], 99.95th=[ 8979],
00:17:00.245 | 99.99th=[13173]
00:17:00.245 bw ( KiB/s): min=51512, max=124288, per=100.00%, avg=117775.05, stdev=14068.85, samples=59
00:17:00.245 iops : min=12878, max=31072, avg=29443.76, stdev=3517.21, samples=59
00:17:00.245 lat (msec) : 2=38.64%, 4=58.72%, 10=2.59%, 20=0.04%, >=2000=0.01%
00:17:00.245 cpu : usr=3.36%, sys=14.88%, ctx=58273, majf=0, minf=13
00:17:00.245 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:17:00.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:00.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:00.245 issued rwts: total=884288,882913,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:00.245 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:00.245
00:17:00.245 Run status group 0 (all jobs):
00:17:00.245 READ: bw=57.6MiB/s (60.4MB/s), 57.6MiB/s-57.6MiB/s (60.4MB/s-60.4MB/s), io=3454MiB (3622MB), run=60001-60001msec
00:17:00.245 WRITE: bw=57.5MiB/s (60.3MB/s), 57.5MiB/s-57.5MiB/s (60.3MB/s-60.3MB/s), io=3449MiB (3616MB), run=60001-60001msec
00:17:00.245
00:17:00.245 Disk stats (read/write):
00:17:00.245 ublkb1: ios=880961/879607, merge=0/0, ticks=3873225/3624842, in_queue=7498068, util=99.88%
00:17:00.245 05:06:26 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:17:00.245 05:06:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:00.245 05:06:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:00.245 [2024-11-28 05:06:26.926337] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:17:00.245 [2024-11-28 05:06:26.975230] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:17:00.245 [2024-11-28 05:06:26.975465] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:17:00.245 [2024-11-28 05:06:26.983214] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:17:00.245 [2024-11-28 05:06:26.983379] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:17:00.245 [2024-11-28 05:06:26.983442] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:17:00.245 05:06:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:00.245 05:06:26 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:17:00.245 05:06:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:00.245 05:06:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:00.245 [2024-11-28 05:06:26.998264] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:17:00.245 [2024-11-28 05:06:26.999559] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:17:00.245 [2024-11-28 05:06:26.999588] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:17:00.245 05:06:27 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:00.245 05:06:27 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:17:00.245 05:06:27 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:17:00.245 05:06:27 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 85138
00:17:00.245 05:06:27 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 85138 ']'
00:17:00.245 05:06:27 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 85138
00:17:00.245 05:06:27 ublk_recovery -- common/autotest_common.sh@959 -- # uname
00:17:00.245 05:06:27 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:00.245 05:06:27 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85138
00:17:00.245 killing process with pid 85138
00:17:00.245 05:06:27 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:00.245 05:06:27 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:00.245 05:06:27 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85138'
00:17:00.245 05:06:27 ublk_recovery -- common/autotest_common.sh@973 -- # kill 85138
00:17:00.245 05:06:27 ublk_recovery -- common/autotest_common.sh@978 -- # wait 85138
00:17:00.245 [2024-11-28 05:06:27.202028] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:17:00.245 [2024-11-28 05:06:27.202075] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:17:00.245 ************************************
00:17:00.245 END TEST ublk_recovery
00:17:00.245 ************************************
00:17:00.245
00:17:00.245 real 1m2.887s
00:17:00.245 user 1m45.773s
00:17:00.245 sys 0m20.393s
00:17:00.245 05:06:27 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:00.245 05:06:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:00.245 05:06:27 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]]
00:17:00.245 05:06:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:17:00.245 05:06:27 -- spdk/autotest.sh@260 -- # timing_exit lib
00:17:00.245 05:06:27 -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:00.245 05:06:27 -- common/autotest_common.sh@10 -- # set +x
00:17:00.245 05:06:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:17:00.245 05:06:27 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:17:00.245 05:06:27 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:17:00.245 05:06:27 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:17:00.245 05:06:27 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:17:00.245 05:06:27 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:17:00.245 05:06:27 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:17:00.245 05:06:27 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:17:00.245 05:06:27 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:17:00.245 05:06:27 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']'
00:17:00.245 05:06:27 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:17:00.245 05:06:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:00.245 05:06:27 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:00.245 05:06:27 -- common/autotest_common.sh@10 -- # set +x
00:17:00.245 ************************************
00:17:00.245 START TEST ftl
00:17:00.245 ************************************
00:17:00.245 05:06:27 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
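Every suite in this run is framed the same way: the START/END banners and the real/user/sys summaries above are printed by the run_test wrapper in common/autotest_common.sh around a timed invocation of the suite script. A simplified sketch of that pattern (assumed shape, not the verbatim helper, which carries extra bookkeeping):

  run_test() {
          local test_name=$1
          shift
          echo '************************************'
          echo "START TEST $test_name"
          echo '************************************'
          time "$@"    # running the suite; `time` emits the real/user/sys lines
          echo '************************************'
          echo "END TEST $test_name"
          echo '************************************'
  }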
00:17:00.245 * Looking for test storage...
00:17:00.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:17:00.246 05:06:27 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:17:00.246 05:06:27 ftl -- common/autotest_common.sh@1693 -- # lcov --version
00:17:00.246 05:06:27 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:17:00.246 05:06:27 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:17:00.246 05:06:27 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:00.246 05:06:27 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:00.246 05:06:27 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:00.246 05:06:27 ftl -- scripts/common.sh@336 -- # IFS=.-:
00:17:00.246 05:06:27 ftl -- scripts/common.sh@336 -- # read -ra ver1
00:17:00.246 05:06:27 ftl -- scripts/common.sh@337 -- # IFS=.-:
00:17:00.246 05:06:27 ftl -- scripts/common.sh@337 -- # read -ra ver2
00:17:00.246 05:06:27 ftl -- scripts/common.sh@338 -- # local 'op=<'
00:17:00.246 05:06:27 ftl -- scripts/common.sh@340 -- # ver1_l=2
00:17:00.246 05:06:27 ftl -- scripts/common.sh@341 -- # ver2_l=1
00:17:00.246 05:06:27 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:00.246 05:06:27 ftl -- scripts/common.sh@344 -- # case "$op" in
00:17:00.246 05:06:27 ftl -- scripts/common.sh@345 -- # : 1
00:17:00.246 05:06:27 ftl -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:00.246 05:06:27 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:00.246 05:06:27 ftl -- scripts/common.sh@365 -- # decimal 1
00:17:00.246 05:06:27 ftl -- scripts/common.sh@353 -- # local d=1
00:17:00.246 05:06:27 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:00.246 05:06:27 ftl -- scripts/common.sh@355 -- # echo 1
00:17:00.246 05:06:27 ftl -- scripts/common.sh@365 -- # ver1[v]=1
00:17:00.246 05:06:27 ftl -- scripts/common.sh@366 -- # decimal 2
00:17:00.246 05:06:27 ftl -- scripts/common.sh@353 -- # local d=2
00:17:00.246 05:06:27 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:00.246 05:06:27 ftl -- scripts/common.sh@355 -- # echo 2
00:17:00.246 05:06:27 ftl -- scripts/common.sh@366 -- # ver2[v]=2
00:17:00.246 05:06:27 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:00.246 05:06:27 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:00.246 05:06:27 ftl -- scripts/common.sh@368 -- # return 0
00:17:00.246 05:06:27 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:00.246 05:06:27 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:17:00.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:00.246 --rc genhtml_branch_coverage=1
00:17:00.246 --rc genhtml_function_coverage=1
00:17:00.246 --rc genhtml_legend=1
00:17:00.246 --rc geninfo_all_blocks=1
00:17:00.246 --rc geninfo_unexecuted_blocks=1
00:17:00.246
00:17:00.246 '
00:17:00.246 05:06:27 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:17:00.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:00.246 --rc genhtml_branch_coverage=1
00:17:00.246 --rc genhtml_function_coverage=1
00:17:00.246 --rc genhtml_legend=1
00:17:00.246 --rc geninfo_all_blocks=1
00:17:00.246 --rc geninfo_unexecuted_blocks=1
00:17:00.246
00:17:00.246 '
00:17:00.246 05:06:27 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:17:00.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:00.246 --rc genhtml_branch_coverage=1
00:17:00.246 --rc genhtml_function_coverage=1
00:17:00.246 --rc genhtml_legend=1
00:17:00.246 --rc geninfo_all_blocks=1
00:17:00.246 --rc geninfo_unexecuted_blocks=1
00:17:00.246
00:17:00.246 '
00:17:00.246 05:06:27 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:17:00.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:00.246 --rc genhtml_branch_coverage=1
00:17:00.246 --rc genhtml_function_coverage=1
00:17:00.246 --rc genhtml_legend=1
00:17:00.246 --rc geninfo_all_blocks=1
00:17:00.246 --rc geninfo_unexecuted_blocks=1
00:17:00.246
00:17:00.246 '
00:17:00.246 05:06:27 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:17:00.246 05:06:27 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:17:00.246 05:06:27 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:17:00.246 05:06:27 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:17:00.246 05:06:27 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:17:00.246 05:06:27 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:17:00.246 05:06:27 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:00.246 05:06:27 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:17:00.246 05:06:27 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:17:00.246 05:06:27 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:00.246 05:06:27 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:00.246 05:06:27 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:17:00.246 05:06:27 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:17:00.246 05:06:27 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:17:00.246 05:06:27 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:17:00.246 05:06:27 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:17:00.246 05:06:27 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:17:00.246 05:06:27 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:00.246 05:06:27 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:00.246 05:06:27 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:17:00.246 05:06:27 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:17:00.246 05:06:27 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:17:00.246 05:06:27 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:17:00.246 05:06:27 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:17:00.246 05:06:27 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:17:00.246 05:06:27 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:17:00.246 05:06:27 ftl -- ftl/common.sh@23 -- # spdk_ini_pid=
00:17:00.246 05:06:27 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:00.246 05:06:27 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:00.246 05:06:27 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:00.246 05:06:27 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT
00:17:00.246 05:06:27 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED=
00:17:00.246 05:06:27 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED=
00:17:00.246 05:06:27 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE=
00:17:00.246 05:06:27 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:17:00.508 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:17:00.508 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:17:00.508 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:17:00.508 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:17:00.508 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:17:00.508 05:06:28 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=85934
00:17:00.508 05:06:28 ftl -- ftl/ftl.sh@38 -- # waitforlisten 85934
00:17:00.508 05:06:28 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
00:17:00.508 05:06:28 ftl -- common/autotest_common.sh@835 -- # '[' -z 85934 ']'
00:17:00.508 05:06:28 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:00.508 05:06:28 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:00.508 05:06:28 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:00.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:00.508 05:06:28 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:00.508 05:06:28 ftl -- common/autotest_common.sh@10 -- # set +x
00:17:00.508 [2024-11-28 05:06:28.318258] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
00:17:00.508 [2024-11-28 05:06:28.318584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85934 ]
00:17:00.508 [2024-11-28 05:06:28.464104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:00.508 [2024-11-28 05:06:28.485755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:00.508 05:06:29 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:00.508 05:06:29 ftl -- common/autotest_common.sh@868 -- # return 0
00:17:00.508 05:06:29 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d
00:17:00.508 05:06:29 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
00:17:00.508 05:06:29 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62
00:17:00.508 05:06:29 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:17:01.082 05:06:30 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720
00:17:01.082 05:06:30 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:17:01.082 05:06:30 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:17:01.343 05:06:30 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0
00:17:01.343 05:06:30 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks
00:17:01.343 05:06:30 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0
00:17:01.343 05:06:30 ftl -- ftl/ftl.sh@50 -- # break
00:17:01.343 05:06:30 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']'
00:17:01.343 05:06:30 ftl -- ftl/ftl.sh@59 -- # base_size=1310720
00:17:01.343 05:06:30 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:17:01.343 05:06:30 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:17:01.605 05:06:30 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0
00:17:01.605 05:06:30 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks
00:17:01.605 05:06:30 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0
00:17:01.605 05:06:30 ftl -- ftl/ftl.sh@63 -- # break
00:17:01.605 05:06:30 ftl -- ftl/ftl.sh@66 -- # killprocess 85934
00:17:01.605 05:06:30 ftl -- common/autotest_common.sh@954 -- # '[' -z 85934 ']'
00:17:01.605 05:06:30 ftl -- common/autotest_common.sh@958 -- # kill -0 85934
00:17:01.605 05:06:30 ftl -- common/autotest_common.sh@959 -- # uname
00:17:01.605 05:06:30 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:01.605 05:06:30 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85934
00:17:01.605 killing process with pid 85934
00:17:01.605 05:06:30 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:01.605 05:06:30 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:01.605 05:06:30 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85934'
00:17:01.605 05:06:30 ftl -- common/autotest_common.sh@973 -- # kill 85934
00:17:01.605 05:06:30 ftl -- common/autotest_common.sh@978 -- # wait 85934
00:17:01.867 05:06:31 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']'
00:17:01.867 05:06:31 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:17:01.867 05:06:31 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:17:01.867 05:06:31 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:01.867 05:06:31 ftl -- common/autotest_common.sh@10 -- # set +x
00:17:01.867 ************************************
00:17:01.867 START TEST ftl_fio_basic
00:17:01.867 ************************************
00:17:01.867 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
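The spdk_tgt at pid 85934 above existed only so ftl.sh could probe for usable devices: two jq filters over bdev_get_bdevs pick a cache disk (one exposing 64-byte metadata) and a distinct base disk. The same selection can be run standalone against any live target; the jq filters below are verbatim from the xtrace above, while the $nv_cache variable stands in for the already-chosen cache address:

  # cache disk: non-zoned, 64B metadata, >= 1310720 blocks (5 GiB at 4 KiB)
  cache_disks=$(scripts/rpc.py bdev_get_bdevs | jq -r \
      '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')
  # base disk: any other non-zoned bdev of sufficient size
  base_disks=$(scripts/rpc.py bdev_get_bdevs | jq -r \
      '.[] | select(.driver_specific.nvme[0].pci_address!="'"$nv_cache"'" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')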
00:17:01.867 * Looking for test storage...
00:17:01.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:17:01.867 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:17:01.867 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version
00:17:01.867 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-:
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-:
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<'
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:17:02.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:02.130 --rc genhtml_branch_coverage=1
00:17:02.130 --rc genhtml_function_coverage=1
00:17:02.130 --rc genhtml_legend=1
00:17:02.130 --rc geninfo_all_blocks=1
00:17:02.130 --rc geninfo_unexecuted_blocks=1
00:17:02.130
00:17:02.130 '
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:17:02.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:02.130 --rc genhtml_branch_coverage=1
00:17:02.130 --rc genhtml_function_coverage=1
00:17:02.130 --rc genhtml_legend=1
00:17:02.130 --rc geninfo_all_blocks=1
00:17:02.130 --rc geninfo_unexecuted_blocks=1
00:17:02.130
00:17:02.130 '
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:17:02.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:02.130 --rc genhtml_branch_coverage=1
00:17:02.130 --rc genhtml_function_coverage=1
00:17:02.130 --rc genhtml_legend=1
00:17:02.130 --rc geninfo_all_blocks=1
00:17:02.130 --rc geninfo_unexecuted_blocks=1
00:17:02.130
00:17:02.130 '
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:17:02.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:02.130 --rc genhtml_branch_coverage=1
00:17:02.130 --rc genhtml_function_coverage=1
00:17:02.130 --rc genhtml_legend=1
00:17:02.130 --rc geninfo_all_blocks=1
00:17:02.130 --rc geninfo_unexecuted_blocks=1
00:17:02.130
00:17:02.130 '
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:17:02.130 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid=
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128'
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid=
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]]
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']'
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0
00:17:02.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=86050
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 86050
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 86050 ']'
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
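The suite table that fio.sh keys its workloads off is visible verbatim in the xtrace above; reconstructed as plain bash (the positional-argument handling around it is assumed):

  declare -A suite
  suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
  suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
  suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
  tests=${suite[$3]}   # the third argument, "basic" here, selects the three randw-verify jobs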
00:17:02.131 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.131 05:06:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:02.131 [2024-11-28 05:06:31.293226] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:17:02.131 [2024-11-28 05:06:31.293375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86050 ] 00:17:02.393 [2024-11-28 05:06:31.441436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:02.393 [2024-11-28 05:06:31.473254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.393 [2024-11-28 05:06:31.473473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.393 [2024-11-28 05:06:31.473626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.965 05:06:32 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.965 05:06:32 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:02.965 05:06:32 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:02.965 05:06:32 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:02.965 05:06:32 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:02.965 05:06:32 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:02.965 05:06:32 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:02.965 05:06:32 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:03.225 05:06:32 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:03.226 05:06:32 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:03.226 05:06:32 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:03.226 05:06:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:03.226 05:06:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:03.226 05:06:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:03.226 05:06:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:03.226 05:06:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:03.487 05:06:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:03.487 { 00:17:03.487 "name": "nvme0n1", 00:17:03.487 "aliases": [ 00:17:03.487 "0c3d8b94-f401-41b2-9d4c-8a550cd3e34f" 00:17:03.487 ], 00:17:03.487 "product_name": "NVMe disk", 00:17:03.487 "block_size": 4096, 00:17:03.487 "num_blocks": 1310720, 00:17:03.487 "uuid": "0c3d8b94-f401-41b2-9d4c-8a550cd3e34f", 00:17:03.487 "numa_id": -1, 00:17:03.487 "assigned_rate_limits": { 00:17:03.487 "rw_ios_per_sec": 0, 00:17:03.487 "rw_mbytes_per_sec": 0, 00:17:03.487 "r_mbytes_per_sec": 0, 00:17:03.487 "w_mbytes_per_sec": 0 00:17:03.487 }, 00:17:03.487 "claimed": false, 00:17:03.487 "zoned": false, 00:17:03.487 "supported_io_types": { 00:17:03.487 "read": true, 00:17:03.487 "write": true, 00:17:03.487 "unmap": true, 00:17:03.487 "flush": true, 00:17:03.487 "reset": true, 00:17:03.487 "nvme_admin": true, 00:17:03.487 "nvme_io": true, 00:17:03.487 "nvme_io_md": 
false, 00:17:03.487 "write_zeroes": true, 00:17:03.487 "zcopy": false, 00:17:03.487 "get_zone_info": false, 00:17:03.487 "zone_management": false, 00:17:03.487 "zone_append": false, 00:17:03.487 "compare": true, 00:17:03.487 "compare_and_write": false, 00:17:03.487 "abort": true, 00:17:03.487 "seek_hole": false, 00:17:03.487 "seek_data": false, 00:17:03.487 "copy": true, 00:17:03.487 "nvme_iov_md": false 00:17:03.487 }, 00:17:03.487 "driver_specific": { 00:17:03.487 "nvme": [ 00:17:03.487 { 00:17:03.487 "pci_address": "0000:00:11.0", 00:17:03.487 "trid": { 00:17:03.487 "trtype": "PCIe", 00:17:03.487 "traddr": "0000:00:11.0" 00:17:03.487 }, 00:17:03.487 "ctrlr_data": { 00:17:03.487 "cntlid": 0, 00:17:03.487 "vendor_id": "0x1b36", 00:17:03.487 "model_number": "QEMU NVMe Ctrl", 00:17:03.487 "serial_number": "12341", 00:17:03.487 "firmware_revision": "8.0.0", 00:17:03.487 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:03.487 "oacs": { 00:17:03.487 "security": 0, 00:17:03.487 "format": 1, 00:17:03.487 "firmware": 0, 00:17:03.487 "ns_manage": 1 00:17:03.487 }, 00:17:03.487 "multi_ctrlr": false, 00:17:03.487 "ana_reporting": false 00:17:03.487 }, 00:17:03.487 "vs": { 00:17:03.487 "nvme_version": "1.4" 00:17:03.487 }, 00:17:03.487 "ns_data": { 00:17:03.487 "id": 1, 00:17:03.487 "can_share": false 00:17:03.487 } 00:17:03.487 } 00:17:03.487 ], 00:17:03.487 "mp_policy": "active_passive" 00:17:03.487 } 00:17:03.487 } 00:17:03.487 ]' 00:17:03.487 05:06:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:03.487 05:06:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:03.487 05:06:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:03.487 05:06:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:03.487 05:06:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:03.487 05:06:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:03.487 05:06:32 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:03.487 05:06:32 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:03.487 05:06:32 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:03.487 05:06:32 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:03.487 05:06:32 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:03.749 05:06:32 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:03.749 05:06:32 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:04.010 05:06:33 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=9359defc-9ce4-444d-ad4b-d6a8d41989c4 00:17:04.010 05:06:33 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9359defc-9ce4-444d-ad4b-d6a8d41989c4 00:17:04.010 05:06:33 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=1e1f0a5f-9b09-4040-90e0-abfb4c7043c4 00:17:04.010 05:06:33 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1e1f0a5f-9b09-4040-90e0-abfb4c7043c4 00:17:04.010 05:06:33 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:04.010 05:06:33 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:04.010 05:06:33 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=1e1f0a5f-9b09-4040-90e0-abfb4c7043c4 00:17:04.010 05:06:33 
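get_bdev_size, traced above, derives a bdev's capacity in MiB from the bdev_get_bdevs JSON: 4096-byte blocks times 1310720 blocks is 5120 MiB for the QEMU namespace, so base_size becomes 5120 and the [[ 103424 -le 5120 ]] guard evaluates false. A sketch of that computation, assuming rpc_py is set as earlier in the log:

# Compute a bdev's size in MiB from its block count and block size.
get_bdev_size() {
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
    echo $(( bs * nb / 1024 / 1024 ))   # here: 4096 * 1310720 / 1024^2 = 5120
}

The 103424 MiB volume created on top of that 5120 MiB namespace only fits because bdev_lvol_create is passed -t: the lvol is thin-provisioned, clusters are allocated on first write, and num_allocated_clusters is accordingly 0 in the dump that follows.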
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:04.010 05:06:33 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 1e1f0a5f-9b09-4040-90e0-abfb4c7043c4 00:17:04.010 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=1e1f0a5f-9b09-4040-90e0-abfb4c7043c4 00:17:04.010 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:04.010 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:04.010 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:04.010 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e1f0a5f-9b09-4040-90e0-abfb4c7043c4 00:17:04.272 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:04.272 { 00:17:04.272 "name": "1e1f0a5f-9b09-4040-90e0-abfb4c7043c4", 00:17:04.272 "aliases": [ 00:17:04.272 "lvs/nvme0n1p0" 00:17:04.272 ], 00:17:04.272 "product_name": "Logical Volume", 00:17:04.272 "block_size": 4096, 00:17:04.272 "num_blocks": 26476544, 00:17:04.272 "uuid": "1e1f0a5f-9b09-4040-90e0-abfb4c7043c4", 00:17:04.272 "assigned_rate_limits": { 00:17:04.272 "rw_ios_per_sec": 0, 00:17:04.272 "rw_mbytes_per_sec": 0, 00:17:04.272 "r_mbytes_per_sec": 0, 00:17:04.272 "w_mbytes_per_sec": 0 00:17:04.272 }, 00:17:04.272 "claimed": false, 00:17:04.272 "zoned": false, 00:17:04.272 "supported_io_types": { 00:17:04.272 "read": true, 00:17:04.272 "write": true, 00:17:04.272 "unmap": true, 00:17:04.272 "flush": false, 00:17:04.272 "reset": true, 00:17:04.272 "nvme_admin": false, 00:17:04.272 "nvme_io": false, 00:17:04.272 "nvme_io_md": false, 00:17:04.272 "write_zeroes": true, 00:17:04.272 "zcopy": false, 00:17:04.272 "get_zone_info": false, 00:17:04.272 "zone_management": false, 00:17:04.272 "zone_append": false, 00:17:04.272 "compare": false, 00:17:04.272 "compare_and_write": false, 00:17:04.272 "abort": false, 00:17:04.272 "seek_hole": true, 00:17:04.272 "seek_data": true, 00:17:04.272 "copy": false, 00:17:04.272 "nvme_iov_md": false 00:17:04.272 }, 00:17:04.272 "driver_specific": { 00:17:04.272 "lvol": { 00:17:04.272 "lvol_store_uuid": "9359defc-9ce4-444d-ad4b-d6a8d41989c4", 00:17:04.272 "base_bdev": "nvme0n1", 00:17:04.272 "thin_provision": true, 00:17:04.272 "num_allocated_clusters": 0, 00:17:04.272 "snapshot": false, 00:17:04.272 "clone": false, 00:17:04.272 "esnap_clone": false 00:17:04.272 } 00:17:04.272 } 00:17:04.272 } 00:17:04.272 ]' 00:17:04.272 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:04.272 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:04.272 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:04.272 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:04.272 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:04.272 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:04.272 05:06:33 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:04.272 05:06:33 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:04.272 05:06:33 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:04.534 05:06:33 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:04.534 05:06:33 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:17:04.534 05:06:33 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 1e1f0a5f-9b09-4040-90e0-abfb4c7043c4 00:17:04.534 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=1e1f0a5f-9b09-4040-90e0-abfb4c7043c4 00:17:04.534 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:04.534 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:04.534 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:04.534 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e1f0a5f-9b09-4040-90e0-abfb4c7043c4 00:17:04.793 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:04.793 { 00:17:04.793 "name": "1e1f0a5f-9b09-4040-90e0-abfb4c7043c4", 00:17:04.793 "aliases": [ 00:17:04.793 "lvs/nvme0n1p0" 00:17:04.793 ], 00:17:04.793 "product_name": "Logical Volume", 00:17:04.793 "block_size": 4096, 00:17:04.793 "num_blocks": 26476544, 00:17:04.793 "uuid": "1e1f0a5f-9b09-4040-90e0-abfb4c7043c4", 00:17:04.793 "assigned_rate_limits": { 00:17:04.794 "rw_ios_per_sec": 0, 00:17:04.794 "rw_mbytes_per_sec": 0, 00:17:04.794 "r_mbytes_per_sec": 0, 00:17:04.794 "w_mbytes_per_sec": 0 00:17:04.794 }, 00:17:04.794 "claimed": false, 00:17:04.794 "zoned": false, 00:17:04.794 "supported_io_types": { 00:17:04.794 "read": true, 00:17:04.794 "write": true, 00:17:04.794 "unmap": true, 00:17:04.794 "flush": false, 00:17:04.794 "reset": true, 00:17:04.794 "nvme_admin": false, 00:17:04.794 "nvme_io": false, 00:17:04.794 "nvme_io_md": false, 00:17:04.794 "write_zeroes": true, 00:17:04.794 "zcopy": false, 00:17:04.794 "get_zone_info": false, 00:17:04.794 "zone_management": false, 00:17:04.794 "zone_append": false, 00:17:04.794 "compare": false, 00:17:04.794 "compare_and_write": false, 00:17:04.794 "abort": false, 00:17:04.794 "seek_hole": true, 00:17:04.794 "seek_data": true, 00:17:04.794 "copy": false, 00:17:04.794 "nvme_iov_md": false 00:17:04.794 }, 00:17:04.794 "driver_specific": { 00:17:04.794 "lvol": { 00:17:04.794 "lvol_store_uuid": "9359defc-9ce4-444d-ad4b-d6a8d41989c4", 00:17:04.794 "base_bdev": "nvme0n1", 00:17:04.794 "thin_provision": true, 00:17:04.794 "num_allocated_clusters": 0, 00:17:04.794 "snapshot": false, 00:17:04.794 "clone": false, 00:17:04.794 "esnap_clone": false 00:17:04.794 } 00:17:04.794 } 00:17:04.794 } 00:17:04.794 ]' 00:17:04.794 05:06:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:04.794 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:04.794 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:04.794 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:04.794 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:04.794 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:04.794 05:06:34 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:04.794 05:06:34 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:05.052 05:06:34 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:05.052 05:06:34 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:05.052 05:06:34 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:05.053 
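The '[' -eq 1 ']' expansion above is fio.sh line 52 testing a variable that is unset and unquoted: the left operand vanishes from the command line, [ is left with a unary expression, and it prints the "unary operator expected" complaint reported on the next line. The test simply evaluates false and the run continues, but the conventional guard is to default the operand; the variable's name is not visible in the log, so flag below is a placeholder:

# Hypothetical guard: an unset, unquoted variable disappears from the test,
# turning '[ $flag -eq 1 ]' into '[ -eq 1 ]'. Defaulting it avoids the error.
flag=${flag:-0}
if [ "$flag" -eq 1 ]; then
    echo "flag set"
fi

Bash's [[ ${flag:-0} -eq 1 ]] sidesteps the word-splitting pitfall as well.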
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:05.053 05:06:34 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 1e1f0a5f-9b09-4040-90e0-abfb4c7043c4 00:17:05.053 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=1e1f0a5f-9b09-4040-90e0-abfb4c7043c4 00:17:05.053 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:05.053 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:05.053 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:05.053 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e1f0a5f-9b09-4040-90e0-abfb4c7043c4 00:17:05.311 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:05.312 { 00:17:05.312 "name": "1e1f0a5f-9b09-4040-90e0-abfb4c7043c4", 00:17:05.312 "aliases": [ 00:17:05.312 "lvs/nvme0n1p0" 00:17:05.312 ], 00:17:05.312 "product_name": "Logical Volume", 00:17:05.312 "block_size": 4096, 00:17:05.312 "num_blocks": 26476544, 00:17:05.312 "uuid": "1e1f0a5f-9b09-4040-90e0-abfb4c7043c4", 00:17:05.312 "assigned_rate_limits": { 00:17:05.312 "rw_ios_per_sec": 0, 00:17:05.312 "rw_mbytes_per_sec": 0, 00:17:05.312 "r_mbytes_per_sec": 0, 00:17:05.312 "w_mbytes_per_sec": 0 00:17:05.312 }, 00:17:05.312 "claimed": false, 00:17:05.312 "zoned": false, 00:17:05.312 "supported_io_types": { 00:17:05.312 "read": true, 00:17:05.312 "write": true, 00:17:05.312 "unmap": true, 00:17:05.312 "flush": false, 00:17:05.312 "reset": true, 00:17:05.312 "nvme_admin": false, 00:17:05.312 "nvme_io": false, 00:17:05.312 "nvme_io_md": false, 00:17:05.312 "write_zeroes": true, 00:17:05.312 "zcopy": false, 00:17:05.312 "get_zone_info": false, 00:17:05.312 "zone_management": false, 00:17:05.312 "zone_append": false, 00:17:05.312 "compare": false, 00:17:05.312 "compare_and_write": false, 00:17:05.312 "abort": false, 00:17:05.312 "seek_hole": true, 00:17:05.312 "seek_data": true, 00:17:05.312 "copy": false, 00:17:05.312 "nvme_iov_md": false 00:17:05.312 }, 00:17:05.312 "driver_specific": { 00:17:05.312 "lvol": { 00:17:05.312 "lvol_store_uuid": "9359defc-9ce4-444d-ad4b-d6a8d41989c4", 00:17:05.312 "base_bdev": "nvme0n1", 00:17:05.312 "thin_provision": true, 00:17:05.312 "num_allocated_clusters": 0, 00:17:05.312 "snapshot": false, 00:17:05.312 "clone": false, 00:17:05.312 "esnap_clone": false 00:17:05.312 } 00:17:05.312 } 00:17:05.312 } 00:17:05.312 ]' 00:17:05.312 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:05.312 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:05.312 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:05.312 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:05.312 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:05.312 05:06:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:05.312 05:06:34 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:05.312 05:06:34 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:05.312 05:06:34 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1e1f0a5f-9b09-4040-90e0-abfb4c7043c4 -c nvc0n1p0 --l2p_dram_limit 60 00:17:05.571 [2024-11-28 05:06:34.679448] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.571 [2024-11-28 05:06:34.679504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:05.571 [2024-11-28 05:06:34.679525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:05.571 [2024-11-28 05:06:34.679541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.571 [2024-11-28 05:06:34.679594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.571 [2024-11-28 05:06:34.679612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:05.571 [2024-11-28 05:06:34.679619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:05.571 [2024-11-28 05:06:34.679627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.571 [2024-11-28 05:06:34.679653] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:05.571 [2024-11-28 05:06:34.679923] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:05.571 [2024-11-28 05:06:34.679942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.571 [2024-11-28 05:06:34.679952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:05.571 [2024-11-28 05:06:34.679959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:17:05.571 [2024-11-28 05:06:34.679968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.571 [2024-11-28 05:06:34.680003] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b9cb94b5-949c-457c-9a4a-cee724e91a57 00:17:05.571 [2024-11-28 05:06:34.681280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.571 [2024-11-28 05:06:34.681306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:05.571 [2024-11-28 05:06:34.681316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:05.571 [2024-11-28 05:06:34.681323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.571 [2024-11-28 05:06:34.688079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.571 [2024-11-28 05:06:34.688113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:05.571 [2024-11-28 05:06:34.688125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.672 ms 00:17:05.571 [2024-11-28 05:06:34.688131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.571 [2024-11-28 05:06:34.688229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.571 [2024-11-28 05:06:34.688239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:05.571 [2024-11-28 05:06:34.688258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:05.571 [2024-11-28 05:06:34.688264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.571 [2024-11-28 05:06:34.688310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.571 [2024-11-28 05:06:34.688324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:05.571 [2024-11-28 05:06:34.688334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:05.571 [2024-11-28 05:06:34.688343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:05.571 [2024-11-28 05:06:34.688372] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:05.571 [2024-11-28 05:06:34.689975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.571 [2024-11-28 05:06:34.690004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:05.571 [2024-11-28 05:06:34.690021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.609 ms 00:17:05.571 [2024-11-28 05:06:34.690030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.571 [2024-11-28 05:06:34.690067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.571 [2024-11-28 05:06:34.690075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:05.571 [2024-11-28 05:06:34.690082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:05.571 [2024-11-28 05:06:34.690094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.571 [2024-11-28 05:06:34.690113] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:05.571 [2024-11-28 05:06:34.690236] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:05.571 [2024-11-28 05:06:34.690248] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:05.571 [2024-11-28 05:06:34.690258] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:05.571 [2024-11-28 05:06:34.690269] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:05.571 [2024-11-28 05:06:34.690278] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:05.571 [2024-11-28 05:06:34.690285] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:05.571 [2024-11-28 05:06:34.690302] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:05.571 [2024-11-28 05:06:34.690319] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:05.571 [2024-11-28 05:06:34.690327] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:05.571 [2024-11-28 05:06:34.690341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.571 [2024-11-28 05:06:34.690349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:05.571 [2024-11-28 05:06:34.690355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:17:05.571 [2024-11-28 05:06:34.690362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.571 [2024-11-28 05:06:34.690436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.571 [2024-11-28 05:06:34.690450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:05.571 [2024-11-28 05:06:34.690457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:05.571 [2024-11-28 05:06:34.690464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.571 [2024-11-28 05:06:34.690555] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:05.571 [2024-11-28 05:06:34.690573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:05.571 
[2024-11-28 05:06:34.690588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.571 [2024-11-28 05:06:34.690596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.571 [2024-11-28 05:06:34.690603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:05.571 [2024-11-28 05:06:34.690610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:05.571 [2024-11-28 05:06:34.690615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:05.571 [2024-11-28 05:06:34.690623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:05.571 [2024-11-28 05:06:34.690631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:05.572 [2024-11-28 05:06:34.690638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.572 [2024-11-28 05:06:34.690650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:05.572 [2024-11-28 05:06:34.690660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:05.572 [2024-11-28 05:06:34.690666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.572 [2024-11-28 05:06:34.690674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:05.572 [2024-11-28 05:06:34.690681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:05.572 [2024-11-28 05:06:34.690688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.572 [2024-11-28 05:06:34.690695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:05.572 [2024-11-28 05:06:34.690703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:05.572 [2024-11-28 05:06:34.690709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.572 [2024-11-28 05:06:34.690717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:05.572 [2024-11-28 05:06:34.690723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:05.572 [2024-11-28 05:06:34.690730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.572 [2024-11-28 05:06:34.690737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:05.572 [2024-11-28 05:06:34.690744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:05.572 [2024-11-28 05:06:34.690750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.572 [2024-11-28 05:06:34.690758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:05.572 [2024-11-28 05:06:34.690765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:05.572 [2024-11-28 05:06:34.690772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.572 [2024-11-28 05:06:34.690778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:05.572 [2024-11-28 05:06:34.690786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:05.572 [2024-11-28 05:06:34.690792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.572 [2024-11-28 05:06:34.690828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:05.572 [2024-11-28 05:06:34.690839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:05.572 [2024-11-28 05:06:34.690847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:17:05.572 [2024-11-28 05:06:34.690853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:05.572 [2024-11-28 05:06:34.690861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:05.572 [2024-11-28 05:06:34.690867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.572 [2024-11-28 05:06:34.690876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:05.572 [2024-11-28 05:06:34.690882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:05.572 [2024-11-28 05:06:34.690890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.572 [2024-11-28 05:06:34.690897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:05.572 [2024-11-28 05:06:34.690905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:05.572 [2024-11-28 05:06:34.690914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.572 [2024-11-28 05:06:34.690921] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:05.572 [2024-11-28 05:06:34.690928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:05.572 [2024-11-28 05:06:34.690940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.572 [2024-11-28 05:06:34.690955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.572 [2024-11-28 05:06:34.690964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:05.572 [2024-11-28 05:06:34.690971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:05.572 [2024-11-28 05:06:34.690978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:05.572 [2024-11-28 05:06:34.690985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:05.572 [2024-11-28 05:06:34.690992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:05.572 [2024-11-28 05:06:34.690998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:05.572 [2024-11-28 05:06:34.691009] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:05.572 [2024-11-28 05:06:34.691016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.572 [2024-11-28 05:06:34.691025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:05.572 [2024-11-28 05:06:34.691031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:05.572 [2024-11-28 05:06:34.691037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:05.572 [2024-11-28 05:06:34.691042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:05.572 [2024-11-28 05:06:34.691050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:05.572 [2024-11-28 05:06:34.691055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:05.572 [2024-11-28 
05:06:34.691063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:05.572 [2024-11-28 05:06:34.691068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:05.572 [2024-11-28 05:06:34.691075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:05.572 [2024-11-28 05:06:34.691080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:05.572 [2024-11-28 05:06:34.691087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:05.572 [2024-11-28 05:06:34.691093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:05.572 [2024-11-28 05:06:34.691099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:05.572 [2024-11-28 05:06:34.691104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:05.572 [2024-11-28 05:06:34.691111] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:05.572 [2024-11-28 05:06:34.691125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.572 [2024-11-28 05:06:34.691132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:05.572 [2024-11-28 05:06:34.691138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:05.572 [2024-11-28 05:06:34.691144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:05.572 [2024-11-28 05:06:34.691153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:05.572 [2024-11-28 05:06:34.691171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.572 [2024-11-28 05:06:34.691188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:05.572 [2024-11-28 05:06:34.691198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:17:05.572 [2024-11-28 05:06:34.691203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.572 [2024-11-28 05:06:34.691273] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
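The layout dump above is the direct product of the bdev_ftl_create -b ftl0 -d 1e1f0a5f-9b09-4040-90e0-abfb4c7043c4 -c nvc0n1p0 --l2p_dram_limit 60 call: FTL exposes 20971520 blocks of 4096 bytes (80 GiB) out of the 103424 MiB base volume, and at 4 bytes per entry the full logical-to-physical table is exactly the 80.00 MiB l2p region in the dump, of which the DRAM limit keeps at most 60 MiB resident (the "59 (of 60) MiB" cache notice appears further down). A quick consistency check of those figures, all taken from the log:

entries=20971520       # "L2P entries: 20971520"
addr_size=4            # "L2P address size: 4" (bytes per entry)
block_size=4096        # FTL block size
echo $(( entries * addr_size / 1024 / 1024 ))   # 80 -> l2p region size, MiB
echo $(( entries * block_size / 1024 ** 3 ))    # 80 -> exposed capacity, GiB

The NV cache scrub that follows is the dominant startup cost: wiping the 5 chunks accounts for 2312.159 ms of the 2454.824 ms 'FTL startup' total reported at the end of the sequence.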
00:17:05.572 [2024-11-28 05:06:34.691281] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:08.105 [2024-11-28 05:06:37.003445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.105 [2024-11-28 05:06:37.003492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:08.105 [2024-11-28 05:06:37.003509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2312.159 ms 00:17:08.105 [2024-11-28 05:06:37.003517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.105 [2024-11-28 05:06:37.011971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.105 [2024-11-28 05:06:37.012019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:08.105 [2024-11-28 05:06:37.012044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.368 ms 00:17:08.105 [2024-11-28 05:06:37.012052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.105 [2024-11-28 05:06:37.012196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.105 [2024-11-28 05:06:37.012212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:08.105 [2024-11-28 05:06:37.012224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:08.105 [2024-11-28 05:06:37.012231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.105 [2024-11-28 05:06:37.028351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.105 [2024-11-28 05:06:37.028395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:08.105 [2024-11-28 05:06:37.028409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.085 ms 00:17:08.105 [2024-11-28 05:06:37.028417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.105 [2024-11-28 05:06:37.028466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.105 [2024-11-28 05:06:37.028486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:08.105 [2024-11-28 05:06:37.028496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:08.105 [2024-11-28 05:06:37.028513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.105 [2024-11-28 05:06:37.028866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.105 [2024-11-28 05:06:37.028896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:08.106 [2024-11-28 05:06:37.028909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:17:08.106 [2024-11-28 05:06:37.028917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.106 [2024-11-28 05:06:37.029045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.106 [2024-11-28 05:06:37.029061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:08.106 [2024-11-28 05:06:37.029074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:08.106 [2024-11-28 05:06:37.029092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.106 [2024-11-28 05:06:37.035140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.106 [2024-11-28 05:06:37.035190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:08.106 [2024-11-28 
05:06:37.035230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.012 ms 00:17:08.106 [2024-11-28 05:06:37.035242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.106 [2024-11-28 05:06:37.044751] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:08.106 [2024-11-28 05:06:37.059167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.106 [2024-11-28 05:06:37.059221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:08.106 [2024-11-28 05:06:37.059231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.823 ms 00:17:08.106 [2024-11-28 05:06:37.059240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.106 [2024-11-28 05:06:37.096854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.106 [2024-11-28 05:06:37.096894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:08.106 [2024-11-28 05:06:37.096905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.573 ms 00:17:08.106 [2024-11-28 05:06:37.096917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.106 [2024-11-28 05:06:37.097105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.106 [2024-11-28 05:06:37.097122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:08.106 [2024-11-28 05:06:37.097131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:17:08.106 [2024-11-28 05:06:37.097140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.106 [2024-11-28 05:06:37.099813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.106 [2024-11-28 05:06:37.099847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:08.106 [2024-11-28 05:06:37.099857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:17:08.106 [2024-11-28 05:06:37.099868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.106 [2024-11-28 05:06:37.102078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.106 [2024-11-28 05:06:37.102111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:08.106 [2024-11-28 05:06:37.102120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.167 ms 00:17:08.106 [2024-11-28 05:06:37.102129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.106 [2024-11-28 05:06:37.102437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.106 [2024-11-28 05:06:37.102456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:08.106 [2024-11-28 05:06:37.102476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:17:08.106 [2024-11-28 05:06:37.102487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.106 [2024-11-28 05:06:37.124052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.106 [2024-11-28 05:06:37.124087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:08.106 [2024-11-28 05:06:37.124108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.537 ms 00:17:08.106 [2024-11-28 05:06:37.124117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.106 [2024-11-28 05:06:37.127681] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.106 [2024-11-28 05:06:37.127717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:08.106 [2024-11-28 05:06:37.127728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.486 ms 00:17:08.106 [2024-11-28 05:06:37.127738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.106 [2024-11-28 05:06:37.130573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.106 [2024-11-28 05:06:37.130608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:08.106 [2024-11-28 05:06:37.130617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.794 ms 00:17:08.106 [2024-11-28 05:06:37.130626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.106 [2024-11-28 05:06:37.133491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.106 [2024-11-28 05:06:37.133527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:08.106 [2024-11-28 05:06:37.133537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.824 ms 00:17:08.106 [2024-11-28 05:06:37.133549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.106 [2024-11-28 05:06:37.133596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.106 [2024-11-28 05:06:37.133608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:08.106 [2024-11-28 05:06:37.133617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:08.106 [2024-11-28 05:06:37.133627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.106 [2024-11-28 05:06:37.133730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.106 [2024-11-28 05:06:37.133745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:08.106 [2024-11-28 05:06:37.133754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:08.106 [2024-11-28 05:06:37.133764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.106 [2024-11-28 05:06:37.134697] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2454.824 ms, result 0 00:17:08.106 { 00:17:08.106 "name": "ftl0", 00:17:08.106 "uuid": "b9cb94b5-949c-457c-9a4a-cee724e91a57" 00:17:08.106 } 00:17:08.106 05:06:37 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:08.106 05:06:37 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:08.106 05:06:37 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:08.106 05:06:37 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:08.106 05:06:37 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:08.106 05:06:37 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:08.106 05:06:37 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:08.106 05:06:37 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:08.366 [ 00:17:08.366 { 00:17:08.366 "name": "ftl0", 00:17:08.366 "aliases": [ 00:17:08.366 "b9cb94b5-949c-457c-9a4a-cee724e91a57" 00:17:08.366 ], 00:17:08.366 "product_name": "FTL disk", 00:17:08.366 
"block_size": 4096, 00:17:08.366 "num_blocks": 20971520, 00:17:08.366 "uuid": "b9cb94b5-949c-457c-9a4a-cee724e91a57", 00:17:08.366 "assigned_rate_limits": { 00:17:08.366 "rw_ios_per_sec": 0, 00:17:08.366 "rw_mbytes_per_sec": 0, 00:17:08.366 "r_mbytes_per_sec": 0, 00:17:08.366 "w_mbytes_per_sec": 0 00:17:08.366 }, 00:17:08.366 "claimed": false, 00:17:08.366 "zoned": false, 00:17:08.366 "supported_io_types": { 00:17:08.366 "read": true, 00:17:08.366 "write": true, 00:17:08.366 "unmap": true, 00:17:08.366 "flush": true, 00:17:08.366 "reset": false, 00:17:08.366 "nvme_admin": false, 00:17:08.366 "nvme_io": false, 00:17:08.366 "nvme_io_md": false, 00:17:08.366 "write_zeroes": true, 00:17:08.366 "zcopy": false, 00:17:08.366 "get_zone_info": false, 00:17:08.366 "zone_management": false, 00:17:08.366 "zone_append": false, 00:17:08.366 "compare": false, 00:17:08.366 "compare_and_write": false, 00:17:08.366 "abort": false, 00:17:08.366 "seek_hole": false, 00:17:08.366 "seek_data": false, 00:17:08.366 "copy": false, 00:17:08.366 "nvme_iov_md": false 00:17:08.366 }, 00:17:08.366 "driver_specific": { 00:17:08.366 "ftl": { 00:17:08.366 "base_bdev": "1e1f0a5f-9b09-4040-90e0-abfb4c7043c4", 00:17:08.366 "cache": "nvc0n1p0" 00:17:08.366 } 00:17:08.366 } 00:17:08.366 } 00:17:08.366 ] 00:17:08.366 05:06:37 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:08.366 05:06:37 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:08.366 05:06:37 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:08.625 05:06:37 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:08.625 05:06:37 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:08.886 [2024-11-28 05:06:37.917048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.886 [2024-11-28 05:06:37.917089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:08.886 [2024-11-28 05:06:37.917102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:08.886 [2024-11-28 05:06:37.917110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.886 [2024-11-28 05:06:37.917141] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:08.886 [2024-11-28 05:06:37.917603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.887 [2024-11-28 05:06:37.917638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:08.887 [2024-11-28 05:06:37.917648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:17:08.887 [2024-11-28 05:06:37.917657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.887 [2024-11-28 05:06:37.918143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.887 [2024-11-28 05:06:37.918167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:08.887 [2024-11-28 05:06:37.918176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:17:08.887 [2024-11-28 05:06:37.918197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.887 [2024-11-28 05:06:37.921423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.887 [2024-11-28 05:06:37.921446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:08.887 [2024-11-28 
05:06:37.921455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.204 ms 00:17:08.887 [2024-11-28 05:06:37.921469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.887 [2024-11-28 05:06:37.927740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.887 [2024-11-28 05:06:37.927769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:08.887 [2024-11-28 05:06:37.927779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.245 ms 00:17:08.887 [2024-11-28 05:06:37.927788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.887 [2024-11-28 05:06:37.929503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.887 [2024-11-28 05:06:37.929542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:08.887 [2024-11-28 05:06:37.929551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.617 ms 00:17:08.887 [2024-11-28 05:06:37.929560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.887 [2024-11-28 05:06:37.933016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.887 [2024-11-28 05:06:37.933056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:08.887 [2024-11-28 05:06:37.933065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.417 ms 00:17:08.887 [2024-11-28 05:06:37.933073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.887 [2024-11-28 05:06:37.933257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.887 [2024-11-28 05:06:37.933274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:08.887 [2024-11-28 05:06:37.933294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:17:08.887 [2024-11-28 05:06:37.933312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.887 [2024-11-28 05:06:37.934577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.887 [2024-11-28 05:06:37.934611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:08.887 [2024-11-28 05:06:37.934619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:17:08.887 [2024-11-28 05:06:37.934628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.887 [2024-11-28 05:06:37.935631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.887 [2024-11-28 05:06:37.935668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:08.887 [2024-11-28 05:06:37.935676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:17:08.887 [2024-11-28 05:06:37.935685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.887 [2024-11-28 05:06:37.936466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.887 [2024-11-28 05:06:37.936500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:08.887 [2024-11-28 05:06:37.936508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:17:08.887 [2024-11-28 05:06:37.936516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.887 [2024-11-28 05:06:37.937375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.887 [2024-11-28 05:06:37.937408] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:08.887 [2024-11-28 05:06:37.937417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:17:08.887 [2024-11-28 05:06:37.937425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.887 [2024-11-28 05:06:37.937468] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:08.887 [2024-11-28 05:06:37.937483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 
05:06:37.937700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:08.887 [2024-11-28 05:06:37.937908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:08.887 [2024-11-28 05:06:37.937933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.937940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.937950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.937958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.937966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.937973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.937982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.937989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.937997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:08.888 [2024-11-28 05:06:37.938369] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:08.888 [2024-11-28 05:06:37.938378] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b9cb94b5-949c-457c-9a4a-cee724e91a57 00:17:08.888 [2024-11-28 05:06:37.938387] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:08.888 [2024-11-28 05:06:37.938394] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:08.888 [2024-11-28 05:06:37.938402] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:08.888 [2024-11-28 05:06:37.938410] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:08.888 [2024-11-28 05:06:37.938418] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:08.888 [2024-11-28 05:06:37.938425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:08.888 [2024-11-28 05:06:37.938434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:08.888 [2024-11-28 05:06:37.938440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:08.888 [2024-11-28 05:06:37.938447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:08.888 [2024-11-28 05:06:37.938455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.888 [2024-11-28 05:06:37.938464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:08.888 [2024-11-28 05:06:37.938472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:17:08.888 [2024-11-28 05:06:37.938493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.888 [2024-11-28 05:06:37.939996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.888 [2024-11-28 05:06:37.940024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:08.888 [2024-11-28 05:06:37.940032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.473 ms 00:17:08.888 [2024-11-28 05:06:37.940041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.888 [2024-11-28 05:06:37.940125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.888 [2024-11-28 05:06:37.940135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:08.888 [2024-11-28 05:06:37.940144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:08.888 [2024-11-28 05:06:37.940153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.888 [2024-11-28 05:06:37.945443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.888 [2024-11-28 05:06:37.945475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:08.888 [2024-11-28 05:06:37.945484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.888 [2024-11-28 05:06:37.945493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.888 
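The "WAF: inf" in the statistics dump above follows directly from the two counters beside it: write amplification factor is the ratio of total media writes to user writes,

    WAF = total writes / user writes = 960 / 0 -> inf

so a freshly created device that has absorbed only internal writes (here presumably metadata, since all 960 writes are non-user) and no user I/O reports infinity by construction; once user data lands, the same dump shows a finite ratio.
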
[2024-11-28 05:06:37.945557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.888 [2024-11-28 05:06:37.945567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:08.888 [2024-11-28 05:06:37.945577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.888 [2024-11-28 05:06:37.945587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.888 [2024-11-28 05:06:37.945682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.888 [2024-11-28 05:06:37.945702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:08.888 [2024-11-28 05:06:37.945710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.888 [2024-11-28 05:06:37.945718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.888 [2024-11-28 05:06:37.945748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.888 [2024-11-28 05:06:37.945757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:08.888 [2024-11-28 05:06:37.945764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.888 [2024-11-28 05:06:37.945783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.888 [2024-11-28 05:06:37.955250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.888 [2024-11-28 05:06:37.955290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:08.888 [2024-11-28 05:06:37.955299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.888 [2024-11-28 05:06:37.955308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.888 [2024-11-28 05:06:37.962982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.888 [2024-11-28 05:06:37.963022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:08.888 [2024-11-28 05:06:37.963043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.888 [2024-11-28 05:06:37.963055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.888 [2024-11-28 05:06:37.963122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.888 [2024-11-28 05:06:37.963135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:08.888 [2024-11-28 05:06:37.963142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.888 [2024-11-28 05:06:37.963151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.889 [2024-11-28 05:06:37.963229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.889 [2024-11-28 05:06:37.963251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:08.889 [2024-11-28 05:06:37.963259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.889 [2024-11-28 05:06:37.963278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.889 [2024-11-28 05:06:37.963355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.889 [2024-11-28 05:06:37.963369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:08.889 [2024-11-28 05:06:37.963377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.889 [2024-11-28 05:06:37.963386] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.889 [2024-11-28 05:06:37.963435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.889 [2024-11-28 05:06:37.963452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:08.889 [2024-11-28 05:06:37.963468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.889 [2024-11-28 05:06:37.963486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.889 [2024-11-28 05:06:37.963528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.889 [2024-11-28 05:06:37.963540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:08.889 [2024-11-28 05:06:37.963548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.889 [2024-11-28 05:06:37.963566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.889 [2024-11-28 05:06:37.963618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.889 [2024-11-28 05:06:37.963631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:08.889 [2024-11-28 05:06:37.963641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.889 [2024-11-28 05:06:37.963650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.889 [2024-11-28 05:06:37.963822] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 46.745 ms, result 0 00:17:08.889 true 00:17:08.889 05:06:37 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 86050 00:17:08.889 05:06:37 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 86050 ']' 00:17:08.889 05:06:37 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 86050 00:17:08.889 05:06:37 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:08.889 05:06:37 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.889 05:06:37 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86050 00:17:08.889 killing process with pid 86050 00:17:08.889 05:06:38 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:08.889 05:06:38 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:08.889 05:06:38 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86050' 00:17:08.889 05:06:38 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 86050 00:17:08.889 05:06:38 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 86050 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:14.161 05:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:14.161 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:14.161 fio-3.35 00:17:14.161 Starting 1 thread 00:17:19.449 00:17:19.449 test: (groupid=0, jobs=1): err= 0: pid=86208: Thu Nov 28 05:06:48 2024 00:17:19.449 read: IOPS=850, BW=56.5MiB/s (59.2MB/s)(255MiB/4507msec) 00:17:19.449 slat (nsec): min=4162, max=22617, avg=5355.12, stdev=1696.55 00:17:19.449 clat (usec): min=330, max=2524, avg=526.66, stdev=125.29 00:17:19.449 lat (usec): min=334, max=2530, avg=532.02, stdev=125.47 00:17:19.449 clat percentiles (usec): 00:17:19.449 | 1.00th=[ 351], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 474], 00:17:19.449 | 30.00th=[ 478], 40.00th=[ 482], 50.00th=[ 486], 60.00th=[ 494], 00:17:19.449 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 693], 95.00th=[ 840], 00:17:19.449 | 99.00th=[ 922], 99.50th=[ 988], 99.90th=[ 1139], 99.95th=[ 1287], 00:17:19.449 | 99.99th=[ 2540] 00:17:19.449 write: IOPS=856, BW=56.9MiB/s (59.6MB/s)(256MiB/4502msec); 0 zone resets 00:17:19.449 slat (usec): min=14, max=103, avg=19.55, stdev= 3.80 00:17:19.449 clat (usec): min=354, max=1737, avg=610.38, stdev=137.70 00:17:19.449 lat (usec): min=376, max=1758, avg=629.93, stdev=137.77 00:17:19.449 clat percentiles (usec): 00:17:19.449 | 1.00th=[ 424], 5.00th=[ 494], 10.00th=[ 502], 20.00th=[ 545], 00:17:19.449 | 30.00th=[ 570], 40.00th=[ 570], 50.00th=[ 578], 60.00th=[ 578], 00:17:19.449 | 70.00th=[ 586], 80.00th=[ 635], 90.00th=[ 857], 95.00th=[ 922], 00:17:19.449 | 99.00th=[ 1074], 99.50th=[ 1172], 99.90th=[ 1549], 99.95th=[ 1713], 00:17:19.449 | 99.99th=[ 1745] 00:17:19.449 bw ( KiB/s): min=43112, max=63104, per=100.00%, avg=58268.44, stdev=5915.26, samples=9 00:17:19.449 iops : min= 634, max= 928, avg=856.89, stdev=86.99, samples=9 00:17:19.449 lat (usec) : 500=36.48%, 750=52.30%, 1000=10.12% 00:17:19.449 
lat (msec) : 2=1.09%, 4=0.01% 00:17:19.449 cpu : usr=99.33%, sys=0.00%, ctx=6, majf=0, minf=1326 00:17:19.449 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:19.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.449 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.449 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:19.449 00:17:19.449 Run status group 0 (all jobs): 00:17:19.449 READ: bw=56.5MiB/s (59.2MB/s), 56.5MiB/s-56.5MiB/s (59.2MB/s-59.2MB/s), io=255MiB (267MB), run=4507-4507msec 00:17:19.449 WRITE: bw=56.9MiB/s (59.6MB/s), 56.9MiB/s-56.9MiB/s (59.6MB/s-59.6MB/s), io=256MiB (269MB), run=4502-4502msec 00:17:20.022 ----------------------------------------------------- 00:17:20.022 Suppressions used: 00:17:20.022 count bytes template 00:17:20.022 1 5 /usr/src/fio/parse.c 00:17:20.022 1 8 libtcmalloc_minimal.so 00:17:20.022 1 904 libcrypto.so 00:17:20.022 ----------------------------------------------------- 00:17:20.022 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:20.022 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:20.284 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:20.284 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:20.284 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:20.284 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 
-- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:20.284 05:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:20.284 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:20.284 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:20.284 fio-3.35 00:17:20.285 Starting 2 threads 00:17:46.886 00:17:46.886 first_half: (groupid=0, jobs=1): err= 0: pid=86310: Thu Nov 28 05:07:13 2024 00:17:46.886 read: IOPS=2881, BW=11.3MiB/s (11.8MB/s)(256MiB/22728msec) 00:17:46.886 slat (nsec): min=3169, max=39050, avg=4460.78, stdev=1356.27 00:17:46.886 clat (msec): min=12, max=369, avg=37.31, stdev=20.72 00:17:46.886 lat (msec): min=12, max=369, avg=37.31, stdev=20.72 00:17:46.886 clat percentiles (msec): 00:17:46.886 | 1.00th=[ 28], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:17:46.886 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 33], 00:17:46.886 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 43], 95.00th=[ 65], 00:17:46.886 | 99.00th=[ 146], 99.50th=[ 159], 99.90th=[ 222], 99.95th=[ 259], 00:17:46.886 | 99.99th=[ 342] 00:17:46.886 write: IOPS=2899, BW=11.3MiB/s (11.9MB/s)(256MiB/22600msec); 0 zone resets 00:17:46.886 slat (usec): min=3, max=2628, avg= 6.11, stdev=23.00 00:17:46.886 clat (usec): min=374, max=32787, avg=7089.23, stdev=5982.22 00:17:46.886 lat (usec): min=382, max=32809, avg=7095.34, stdev=5983.62 00:17:46.886 clat percentiles (usec): 00:17:46.886 | 1.00th=[ 824], 5.00th=[ 1778], 10.00th=[ 2409], 20.00th=[ 3261], 00:17:46.886 | 30.00th=[ 3982], 40.00th=[ 4817], 50.00th=[ 5211], 60.00th=[ 5538], 00:17:46.886 | 70.00th=[ 5932], 80.00th=[ 8848], 90.00th=[17695], 95.00th=[21890], 00:17:46.886 | 99.00th=[25822], 99.50th=[27395], 99.90th=[30016], 99.95th=[30540], 00:17:46.886 | 99.99th=[31065] 00:17:46.886 bw ( KiB/s): min= 480, max=42080, per=100.00%, avg=23656.00, stdev=13993.28, samples=22 00:17:46.886 iops : min= 120, max=10520, avg=5914.00, stdev=3498.32, samples=22 00:17:46.886 lat (usec) : 500=0.02%, 750=0.21%, 1000=0.78% 00:17:46.886 lat (msec) : 2=2.22%, 4=11.81%, 10=25.61%, 20=5.89%, 50=50.17% 00:17:46.886 lat (msec) : 100=1.75%, 250=1.51%, 500=0.03% 00:17:46.886 cpu : usr=99.19%, sys=0.18%, ctx=39, majf=0, minf=5557 00:17:46.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:46.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.886 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:46.886 issued rwts: total=65482,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:46.886 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:46.886 second_half: (groupid=0, jobs=1): err= 0: pid=86311: Thu Nov 28 05:07:13 2024 00:17:46.886 read: IOPS=2859, BW=11.2MiB/s (11.7MB/s)(256MiB/22899msec) 00:17:46.886 slat (nsec): min=3038, max=46561, avg=5428.72, stdev=1406.19 00:17:46.886 clat (usec): min=504, max=459913, avg=37197.67, stdev=25922.64 00:17:46.886 lat (usec): min=509, max=459918, avg=37203.10, stdev=25922.77 00:17:46.886 clat percentiles (msec): 00:17:46.886 | 1.00th=[ 9], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 31], 00:17:46.886 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:17:46.886 | 70.00th=[ 36], 80.00th=[ 36], 90.00th=[ 42], 95.00th=[ 72], 00:17:46.886 | 99.00th=[ 155], 
99.50th=[ 169], 99.90th=[ 351], 99.95th=[ 418], 00:17:46.886 | 99.99th=[ 456] 00:17:46.886 write: IOPS=2865, BW=11.2MiB/s (11.7MB/s)(256MiB/22868msec); 0 zone resets 00:17:46.886 slat (usec): min=3, max=3371, avg= 6.82, stdev=17.29 00:17:46.886 clat (usec): min=349, max=47430, avg=7537.49, stdev=8120.87 00:17:46.886 lat (usec): min=355, max=47435, avg=7544.31, stdev=8121.46 00:17:46.886 clat percentiles (usec): 00:17:46.886 | 1.00th=[ 758], 5.00th=[ 922], 10.00th=[ 1254], 20.00th=[ 2376], 00:17:46.886 | 30.00th=[ 3228], 40.00th=[ 4146], 50.00th=[ 5145], 60.00th=[ 5669], 00:17:46.886 | 70.00th=[ 6128], 80.00th=[ 9372], 90.00th=[21103], 95.00th=[25560], 00:17:46.886 | 99.00th=[36963], 99.50th=[42730], 99.90th=[45351], 99.95th=[45876], 00:17:46.886 | 99.99th=[46400] 00:17:46.886 bw ( KiB/s): min= 352, max=42224, per=99.03%, avg=22705.39, stdev=13667.86, samples=23 00:17:46.886 iops : min= 88, max=10556, avg=5676.35, stdev=3416.97, samples=23 00:17:46.886 lat (usec) : 500=0.02%, 750=0.43%, 1000=2.86% 00:17:46.886 lat (msec) : 2=5.22%, 4=11.06%, 10=21.90%, 20=4.50%, 50=50.74% 00:17:46.886 lat (msec) : 100=1.51%, 250=1.67%, 500=0.08% 00:17:46.886 cpu : usr=99.23%, sys=0.20%, ctx=49, majf=0, minf=5577 00:17:46.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:46.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.886 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:46.886 issued rwts: total=65480,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:46.886 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:46.886 00:17:46.886 Run status group 0 (all jobs): 00:17:46.886 READ: bw=22.3MiB/s (23.4MB/s), 11.2MiB/s-11.3MiB/s (11.7MB/s-11.8MB/s), io=512MiB (536MB), run=22728-22899msec 00:17:46.886 WRITE: bw=22.4MiB/s (23.5MB/s), 11.2MiB/s-11.3MiB/s (11.7MB/s-11.9MB/s), io=512MiB (537MB), run=22600-22868msec 00:17:46.886 ----------------------------------------------------- 00:17:46.886 Suppressions used: 00:17:46.886 count bytes template 00:17:46.886 2 10 /usr/src/fio/parse.c 00:17:46.886 4 384 /usr/src/fio/iolog.c 00:17:46.886 1 8 libtcmalloc_minimal.so 00:17:46.886 1 904 libcrypto.so 00:17:46.886 ----------------------------------------------------- 00:17:46.886 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:46.886 05:07:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:46.886 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:46.886 fio-3.35 00:17:46.886 Starting 1 thread 00:18:05.024 00:18:05.024 test: (groupid=0, jobs=1): err= 0: pid=86607: Thu Nov 28 05:07:33 2024 00:18:05.024 read: IOPS=6384, BW=24.9MiB/s (26.1MB/s)(255MiB/10213msec) 00:18:05.024 slat (nsec): min=3128, max=40808, avg=5264.99, stdev=1703.21 00:18:05.024 clat (usec): min=554, max=41063, avg=20040.34, stdev=2881.35 00:18:05.024 lat (usec): min=559, max=41067, avg=20045.60, stdev=2881.80 00:18:05.024 clat percentiles (usec): 00:18:05.024 | 1.00th=[15008], 5.00th=[15926], 10.00th=[16712], 20.00th=[17957], 00:18:05.024 | 30.00th=[18482], 40.00th=[19006], 50.00th=[19530], 60.00th=[20317], 00:18:05.024 | 70.00th=[20841], 80.00th=[22152], 90.00th=[23987], 95.00th=[25297], 00:18:05.024 | 99.00th=[28181], 99.50th=[29230], 99.90th=[33162], 99.95th=[36963], 00:18:05.024 | 99.99th=[40109] 00:18:05.024 write: IOPS=9177, BW=35.8MiB/s (37.6MB/s)(256MiB/7141msec); 0 zone resets 00:18:05.024 slat (usec): min=4, max=1305, avg= 8.03, stdev= 8.20 00:18:05.024 clat (usec): min=712, max=68993, avg=13877.11, stdev=15627.92 00:18:05.024 lat (usec): min=718, max=68999, avg=13885.14, stdev=15627.83 00:18:05.024 clat percentiles (usec): 00:18:05.024 | 1.00th=[ 1205], 5.00th=[ 1483], 10.00th=[ 1680], 20.00th=[ 1958], 00:18:05.024 | 30.00th=[ 2245], 40.00th=[ 3064], 50.00th=[ 9241], 60.00th=[11863], 00:18:05.024 | 70.00th=[15139], 80.00th=[17957], 90.00th=[46400], 95.00th=[49021], 00:18:05.024 | 99.00th=[52167], 99.50th=[53216], 99.90th=[55313], 99.95th=[56361], 00:18:05.024 | 99.99th=[65274] 00:18:05.024 bw ( KiB/s): min= 8496, max=51520, per=95.21%, avg=34952.53, stdev=9231.97, samples=15 00:18:05.024 iops : min= 2124, max=12880, avg=8738.13, stdev=2307.99, samples=15 00:18:05.024 lat (usec) : 750=0.01%, 1000=0.10% 00:18:05.024 lat (msec) : 2=10.87%, 4=9.72%, 10=5.97%, 20=43.16%, 50=28.50% 00:18:05.024 lat (msec) : 100=1.67% 00:18:05.024 cpu : usr=98.96%, sys=0.24%, ctx=23, majf=0, minf=5577 00:18:05.024 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:05.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.024 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:05.024 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:05.024 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:05.024 00:18:05.024 Run status group 0 (all jobs): 00:18:05.024 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=255MiB (267MB), run=10213-10213msec 00:18:05.024 WRITE: bw=35.8MiB/s (37.6MB/s), 35.8MiB/s-35.8MiB/s (37.6MB/s-37.6MB/s), io=256MiB (268MB), run=7141-7141msec 00:18:05.024 ----------------------------------------------------- 00:18:05.024 Suppressions used: 00:18:05.024 count bytes template 00:18:05.024 1 5 /usr/src/fio/parse.c 00:18:05.024 2 192 /usr/src/fio/iolog.c 00:18:05.024 1 8 libtcmalloc_minimal.so 00:18:05.024 1 904 libcrypto.so 00:18:05.024 ----------------------------------------------------- 00:18:05.024 00:18:05.024 05:07:34 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:05.024 05:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:05.024 05:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:05.024 05:07:34 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:05.024 05:07:34 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:05.024 Remove shared memory files 00:18:05.024 05:07:34 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:05.024 05:07:34 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:05.024 05:07:34 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:05.024 05:07:34 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69076 /dev/shm/spdk_tgt_trace.pid84991 00:18:05.024 05:07:34 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:05.024 05:07:34 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:05.024 00:18:05.024 real 1m3.199s 00:18:05.024 user 2m19.476s 00:18:05.024 sys 0m2.779s 00:18:05.024 05:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.024 05:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:05.024 ************************************ 00:18:05.024 END TEST ftl_fio_basic 00:18:05.024 ************************************ 00:18:05.024 05:07:34 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:05.024 05:07:34 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:05.024 05:07:34 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.024 05:07:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:05.024 ************************************ 00:18:05.024 START TEST ftl_bdevperf 00:18:05.024 ************************************ 00:18:05.024 05:07:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:05.287 * Looking for test storage... 
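All three fio runs in the test above follow the same invocation pattern that the fio_bdev xtrace spells out: resolve which ASAN runtime the SPDK fio plugin links against, then preload it ahead of the spdk_bdev engine so the sanitizer initializes first. Condensed into a sketch, with paths exactly as traced in this log and the job file from the last (depth128) run:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# ldd prints 'libasan.so.8 => /usr/lib64/libasan.so.8 (...)'; keep the resolved path
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio

The reported numbers are internally consistent; on the depth128 read side, 6384 IOPS * 4096 B = 26148864 B/s, which is the 26.1 MB/s shown in the READ line.
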
00:18:05.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:05.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.287 --rc genhtml_branch_coverage=1 00:18:05.287 --rc genhtml_function_coverage=1 00:18:05.287 --rc genhtml_legend=1 00:18:05.287 --rc geninfo_all_blocks=1 00:18:05.287 --rc geninfo_unexecuted_blocks=1 00:18:05.287 00:18:05.287 ' 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:05.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.287 --rc genhtml_branch_coverage=1 00:18:05.287 
--rc genhtml_function_coverage=1 00:18:05.287 --rc genhtml_legend=1 00:18:05.287 --rc geninfo_all_blocks=1 00:18:05.287 --rc geninfo_unexecuted_blocks=1 00:18:05.287 00:18:05.287 ' 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:05.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.287 --rc genhtml_branch_coverage=1 00:18:05.287 --rc genhtml_function_coverage=1 00:18:05.287 --rc genhtml_legend=1 00:18:05.287 --rc geninfo_all_blocks=1 00:18:05.287 --rc geninfo_unexecuted_blocks=1 00:18:05.287 00:18:05.287 ' 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:05.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.287 --rc genhtml_branch_coverage=1 00:18:05.287 --rc genhtml_function_coverage=1 00:18:05.287 --rc genhtml_legend=1 00:18:05.287 --rc geninfo_all_blocks=1 00:18:05.287 --rc geninfo_unexecuted_blocks=1 00:18:05.287 00:18:05.287 ' 00:18:05.287 05:07:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=86875 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:05.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 86875 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 86875 ']' 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:05.288 05:07:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:05.288 [2024-11-28 05:07:34.538402] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
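The startup traced here is the usual pattern for driving bdevperf over RPC: launch it idle (-z and -T ftl0 exactly as on the command line echoed above) so nothing runs until the FTL bdev has been constructed, keep the pid for the cleanup trap, and wait for the RPC socket before issuing configuration calls. A minimal sketch under those assumptions; the socket poll stands in for the waitforlisten helper, whose real implementation is not shown in this log:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
bdevperf_pid=$!
trap 'kill $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
# default SPDK RPC socket, as named in the "Waiting for process..." message above
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
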
00:18:05.288 [2024-11-28 05:07:34.538572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86875 ] 00:18:05.550 [2024-11-28 05:07:34.685779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.550 [2024-11-28 05:07:34.715260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.122 05:07:35 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.122 05:07:35 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:06.122 05:07:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:06.122 05:07:35 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:06.122 05:07:35 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:06.122 05:07:35 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:06.122 05:07:35 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:06.122 05:07:35 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:06.383 05:07:35 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:06.383 05:07:35 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:06.383 05:07:35 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:06.383 05:07:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:06.383 05:07:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:06.383 05:07:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:06.383 05:07:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:06.383 05:07:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:06.645 05:07:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:06.645 { 00:18:06.645 "name": "nvme0n1", 00:18:06.645 "aliases": [ 00:18:06.645 "8cd5db44-f318-4b01-adcd-d00274e10128" 00:18:06.645 ], 00:18:06.645 "product_name": "NVMe disk", 00:18:06.645 "block_size": 4096, 00:18:06.645 "num_blocks": 1310720, 00:18:06.645 "uuid": "8cd5db44-f318-4b01-adcd-d00274e10128", 00:18:06.645 "numa_id": -1, 00:18:06.645 "assigned_rate_limits": { 00:18:06.645 "rw_ios_per_sec": 0, 00:18:06.645 "rw_mbytes_per_sec": 0, 00:18:06.645 "r_mbytes_per_sec": 0, 00:18:06.645 "w_mbytes_per_sec": 0 00:18:06.645 }, 00:18:06.645 "claimed": true, 00:18:06.645 "claim_type": "read_many_write_one", 00:18:06.645 "zoned": false, 00:18:06.645 "supported_io_types": { 00:18:06.645 "read": true, 00:18:06.645 "write": true, 00:18:06.645 "unmap": true, 00:18:06.645 "flush": true, 00:18:06.645 "reset": true, 00:18:06.645 "nvme_admin": true, 00:18:06.645 "nvme_io": true, 00:18:06.645 "nvme_io_md": false, 00:18:06.645 "write_zeroes": true, 00:18:06.645 "zcopy": false, 00:18:06.645 "get_zone_info": false, 00:18:06.645 "zone_management": false, 00:18:06.645 "zone_append": false, 00:18:06.645 "compare": true, 00:18:06.645 "compare_and_write": false, 00:18:06.645 "abort": true, 00:18:06.645 "seek_hole": false, 00:18:06.645 "seek_data": false, 00:18:06.645 "copy": true, 00:18:06.645 "nvme_iov_md": false 00:18:06.645 }, 00:18:06.645 "driver_specific": { 00:18:06.645 
"nvme": [ 00:18:06.645 { 00:18:06.645 "pci_address": "0000:00:11.0", 00:18:06.645 "trid": { 00:18:06.645 "trtype": "PCIe", 00:18:06.645 "traddr": "0000:00:11.0" 00:18:06.645 }, 00:18:06.645 "ctrlr_data": { 00:18:06.645 "cntlid": 0, 00:18:06.645 "vendor_id": "0x1b36", 00:18:06.645 "model_number": "QEMU NVMe Ctrl", 00:18:06.645 "serial_number": "12341", 00:18:06.645 "firmware_revision": "8.0.0", 00:18:06.645 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:06.645 "oacs": { 00:18:06.645 "security": 0, 00:18:06.645 "format": 1, 00:18:06.645 "firmware": 0, 00:18:06.645 "ns_manage": 1 00:18:06.645 }, 00:18:06.645 "multi_ctrlr": false, 00:18:06.645 "ana_reporting": false 00:18:06.645 }, 00:18:06.645 "vs": { 00:18:06.645 "nvme_version": "1.4" 00:18:06.645 }, 00:18:06.645 "ns_data": { 00:18:06.645 "id": 1, 00:18:06.645 "can_share": false 00:18:06.645 } 00:18:06.645 } 00:18:06.645 ], 00:18:06.645 "mp_policy": "active_passive" 00:18:06.645 } 00:18:06.645 } 00:18:06.645 ]' 00:18:06.645 05:07:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:06.645 05:07:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:06.645 05:07:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:06.645 05:07:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:06.645 05:07:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:06.645 05:07:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:06.645 05:07:35 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:06.645 05:07:35 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:06.645 05:07:35 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:06.645 05:07:35 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:06.645 05:07:35 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:06.907 05:07:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=9359defc-9ce4-444d-ad4b-d6a8d41989c4 00:18:06.907 05:07:36 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:06.907 05:07:36 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9359defc-9ce4-444d-ad4b-d6a8d41989c4 00:18:07.169 05:07:36 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:07.430 05:07:36 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=00a19742-c4c4-4c06-a306-60d2ec3075dc 00:18:07.430 05:07:36 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 00a19742-c4c4-4c06-a306-60d2ec3075dc 00:18:07.692 05:07:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=8353e525-e67b-4ca7-add2-e3e74964acf5 00:18:07.692 05:07:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8353e525-e67b-4ca7-add2-e3e74964acf5 00:18:07.692 05:07:36 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:07.692 05:07:36 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:07.692 05:07:36 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=8353e525-e67b-4ca7-add2-e3e74964acf5 00:18:07.692 05:07:36 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:07.692 05:07:36 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 8353e525-e67b-4ca7-add2-e3e74964acf5 00:18:07.692 05:07:36 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=8353e525-e67b-4ca7-add2-e3e74964acf5 00:18:07.692 05:07:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:07.692 05:07:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:07.692 05:07:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:07.692 05:07:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8353e525-e67b-4ca7-add2-e3e74964acf5 00:18:07.692 05:07:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:07.692 { 00:18:07.692 "name": "8353e525-e67b-4ca7-add2-e3e74964acf5", 00:18:07.692 "aliases": [ 00:18:07.692 "lvs/nvme0n1p0" 00:18:07.692 ], 00:18:07.692 "product_name": "Logical Volume", 00:18:07.692 "block_size": 4096, 00:18:07.692 "num_blocks": 26476544, 00:18:07.692 "uuid": "8353e525-e67b-4ca7-add2-e3e74964acf5", 00:18:07.692 "assigned_rate_limits": { 00:18:07.692 "rw_ios_per_sec": 0, 00:18:07.692 "rw_mbytes_per_sec": 0, 00:18:07.692 "r_mbytes_per_sec": 0, 00:18:07.692 "w_mbytes_per_sec": 0 00:18:07.692 }, 00:18:07.692 "claimed": false, 00:18:07.692 "zoned": false, 00:18:07.692 "supported_io_types": { 00:18:07.692 "read": true, 00:18:07.692 "write": true, 00:18:07.692 "unmap": true, 00:18:07.692 "flush": false, 00:18:07.692 "reset": true, 00:18:07.692 "nvme_admin": false, 00:18:07.692 "nvme_io": false, 00:18:07.692 "nvme_io_md": false, 00:18:07.692 "write_zeroes": true, 00:18:07.692 "zcopy": false, 00:18:07.692 "get_zone_info": false, 00:18:07.692 "zone_management": false, 00:18:07.692 "zone_append": false, 00:18:07.692 "compare": false, 00:18:07.692 "compare_and_write": false, 00:18:07.692 "abort": false, 00:18:07.692 "seek_hole": true, 00:18:07.692 "seek_data": true, 00:18:07.692 "copy": false, 00:18:07.692 "nvme_iov_md": false 00:18:07.692 }, 00:18:07.692 "driver_specific": { 00:18:07.692 "lvol": { 00:18:07.692 "lvol_store_uuid": "00a19742-c4c4-4c06-a306-60d2ec3075dc", 00:18:07.692 "base_bdev": "nvme0n1", 00:18:07.692 "thin_provision": true, 00:18:07.692 "num_allocated_clusters": 0, 00:18:07.692 "snapshot": false, 00:18:07.692 "clone": false, 00:18:07.692 "esnap_clone": false 00:18:07.692 } 00:18:07.692 } 00:18:07.692 } 00:18:07.692 ]' 00:18:07.692 05:07:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:07.952 05:07:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:07.952 05:07:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:07.952 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:07.952 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:07.952 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:07.952 05:07:37 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:07.952 05:07:37 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:07.952 05:07:37 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:08.211 05:07:37 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:08.211 05:07:37 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:08.211 05:07:37 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 8353e525-e67b-4ca7-add2-e3e74964acf5 00:18:08.211 05:07:37 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=8353e525-e67b-4ca7-add2-e3e74964acf5 00:18:08.211 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:08.211 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:08.211 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:08.211 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8353e525-e67b-4ca7-add2-e3e74964acf5 00:18:08.211 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:08.211 { 00:18:08.211 "name": "8353e525-e67b-4ca7-add2-e3e74964acf5", 00:18:08.211 "aliases": [ 00:18:08.211 "lvs/nvme0n1p0" 00:18:08.211 ], 00:18:08.211 "product_name": "Logical Volume", 00:18:08.211 "block_size": 4096, 00:18:08.211 "num_blocks": 26476544, 00:18:08.211 "uuid": "8353e525-e67b-4ca7-add2-e3e74964acf5", 00:18:08.211 "assigned_rate_limits": { 00:18:08.211 "rw_ios_per_sec": 0, 00:18:08.211 "rw_mbytes_per_sec": 0, 00:18:08.211 "r_mbytes_per_sec": 0, 00:18:08.211 "w_mbytes_per_sec": 0 00:18:08.211 }, 00:18:08.211 "claimed": false, 00:18:08.211 "zoned": false, 00:18:08.211 "supported_io_types": { 00:18:08.211 "read": true, 00:18:08.211 "write": true, 00:18:08.211 "unmap": true, 00:18:08.211 "flush": false, 00:18:08.211 "reset": true, 00:18:08.211 "nvme_admin": false, 00:18:08.211 "nvme_io": false, 00:18:08.211 "nvme_io_md": false, 00:18:08.211 "write_zeroes": true, 00:18:08.211 "zcopy": false, 00:18:08.212 "get_zone_info": false, 00:18:08.212 "zone_management": false, 00:18:08.212 "zone_append": false, 00:18:08.212 "compare": false, 00:18:08.212 "compare_and_write": false, 00:18:08.212 "abort": false, 00:18:08.212 "seek_hole": true, 00:18:08.212 "seek_data": true, 00:18:08.212 "copy": false, 00:18:08.212 "nvme_iov_md": false 00:18:08.212 }, 00:18:08.212 "driver_specific": { 00:18:08.212 "lvol": { 00:18:08.212 "lvol_store_uuid": "00a19742-c4c4-4c06-a306-60d2ec3075dc", 00:18:08.212 "base_bdev": "nvme0n1", 00:18:08.212 "thin_provision": true, 00:18:08.212 "num_allocated_clusters": 0, 00:18:08.212 "snapshot": false, 00:18:08.212 "clone": false, 00:18:08.212 "esnap_clone": false 00:18:08.212 } 00:18:08.212 } 00:18:08.212 } 00:18:08.212 ]' 00:18:08.212 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:08.470 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:08.471 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:08.471 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:08.471 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:08.471 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:08.471 05:07:37 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:08.471 05:07:37 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:08.471 05:07:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:08.471 05:07:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 8353e525-e67b-4ca7-add2-e3e74964acf5 00:18:08.471 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=8353e525-e67b-4ca7-add2-e3e74964acf5 00:18:08.471 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:08.471 05:07:37 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:18:08.471 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:08.471 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8353e525-e67b-4ca7-add2-e3e74964acf5 00:18:08.729 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:08.729 { 00:18:08.729 "name": "8353e525-e67b-4ca7-add2-e3e74964acf5", 00:18:08.729 "aliases": [ 00:18:08.729 "lvs/nvme0n1p0" 00:18:08.729 ], 00:18:08.729 "product_name": "Logical Volume", 00:18:08.729 "block_size": 4096, 00:18:08.729 "num_blocks": 26476544, 00:18:08.729 "uuid": "8353e525-e67b-4ca7-add2-e3e74964acf5", 00:18:08.729 "assigned_rate_limits": { 00:18:08.729 "rw_ios_per_sec": 0, 00:18:08.729 "rw_mbytes_per_sec": 0, 00:18:08.729 "r_mbytes_per_sec": 0, 00:18:08.729 "w_mbytes_per_sec": 0 00:18:08.729 }, 00:18:08.729 "claimed": false, 00:18:08.729 "zoned": false, 00:18:08.729 "supported_io_types": { 00:18:08.729 "read": true, 00:18:08.729 "write": true, 00:18:08.729 "unmap": true, 00:18:08.729 "flush": false, 00:18:08.729 "reset": true, 00:18:08.729 "nvme_admin": false, 00:18:08.729 "nvme_io": false, 00:18:08.730 "nvme_io_md": false, 00:18:08.730 "write_zeroes": true, 00:18:08.730 "zcopy": false, 00:18:08.730 "get_zone_info": false, 00:18:08.730 "zone_management": false, 00:18:08.730 "zone_append": false, 00:18:08.730 "compare": false, 00:18:08.730 "compare_and_write": false, 00:18:08.730 "abort": false, 00:18:08.730 "seek_hole": true, 00:18:08.730 "seek_data": true, 00:18:08.730 "copy": false, 00:18:08.730 "nvme_iov_md": false 00:18:08.730 }, 00:18:08.730 "driver_specific": { 00:18:08.730 "lvol": { 00:18:08.730 "lvol_store_uuid": "00a19742-c4c4-4c06-a306-60d2ec3075dc", 00:18:08.730 "base_bdev": "nvme0n1", 00:18:08.730 "thin_provision": true, 00:18:08.730 "num_allocated_clusters": 0, 00:18:08.730 "snapshot": false, 00:18:08.730 "clone": false, 00:18:08.730 "esnap_clone": false 00:18:08.730 } 00:18:08.730 } 00:18:08.730 } 00:18:08.730 ]' 00:18:08.730 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:08.730 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:08.730 05:07:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:08.730 05:07:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:08.730 05:07:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:08.730 05:07:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:08.730 05:07:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:08.730 05:07:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8353e525-e67b-4ca7-add2-e3e74964acf5 -c nvc0n1p0 --l2p_dram_limit 20 00:18:08.991 [2024-11-28 05:07:38.197488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.991 [2024-11-28 05:07:38.197524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:08.991 [2024-11-28 05:07:38.197539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:08.991 [2024-11-28 05:07:38.197546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.991 [2024-11-28 05:07:38.197590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.991 [2024-11-28 05:07:38.197600] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:08.991 [2024-11-28 05:07:38.197609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:08.991 [2024-11-28 05:07:38.197617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.991 [2024-11-28 05:07:38.197634] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:08.991 [2024-11-28 05:07:38.197867] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:08.991 [2024-11-28 05:07:38.197888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.991 [2024-11-28 05:07:38.197896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:08.991 [2024-11-28 05:07:38.197903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:18:08.991 [2024-11-28 05:07:38.197910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.991 [2024-11-28 05:07:38.197933] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 894852a0-6cba-46c7-8177-93afdf495e08 00:18:08.991 [2024-11-28 05:07:38.198855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.991 [2024-11-28 05:07:38.198883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:08.991 [2024-11-28 05:07:38.198890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:08.991 [2024-11-28 05:07:38.198899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.991 [2024-11-28 05:07:38.203525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.991 [2024-11-28 05:07:38.203551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:08.991 [2024-11-28 05:07:38.203558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.590 ms 00:18:08.991 [2024-11-28 05:07:38.203570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.991 [2024-11-28 05:07:38.203659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.991 [2024-11-28 05:07:38.203670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:08.991 [2024-11-28 05:07:38.203680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:08.991 [2024-11-28 05:07:38.203687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.991 [2024-11-28 05:07:38.203722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.991 [2024-11-28 05:07:38.203732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:08.991 [2024-11-28 05:07:38.203738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:08.991 [2024-11-28 05:07:38.203751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.991 [2024-11-28 05:07:38.203766] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:08.991 [2024-11-28 05:07:38.205000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.991 [2024-11-28 05:07:38.205029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:08.991 [2024-11-28 05:07:38.205037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:18:08.991 [2024-11-28 05:07:38.205043] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.991 [2024-11-28 05:07:38.205066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.991 [2024-11-28 05:07:38.205074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:08.991 [2024-11-28 05:07:38.205085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:08.991 [2024-11-28 05:07:38.205091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.991 [2024-11-28 05:07:38.205104] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:08.991 [2024-11-28 05:07:38.205219] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:08.991 [2024-11-28 05:07:38.205234] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:08.991 [2024-11-28 05:07:38.205242] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:08.991 [2024-11-28 05:07:38.205253] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:08.991 [2024-11-28 05:07:38.205260] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:08.991 [2024-11-28 05:07:38.205270] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:08.991 [2024-11-28 05:07:38.205276] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:08.992 [2024-11-28 05:07:38.205284] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:08.992 [2024-11-28 05:07:38.205290] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:08.992 [2024-11-28 05:07:38.205297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.992 [2024-11-28 05:07:38.205303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:08.992 [2024-11-28 05:07:38.205313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:18:08.992 [2024-11-28 05:07:38.205319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.992 [2024-11-28 05:07:38.205384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.992 [2024-11-28 05:07:38.205398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:08.992 [2024-11-28 05:07:38.205406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:08.992 [2024-11-28 05:07:38.205411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.992 [2024-11-28 05:07:38.205493] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:08.992 [2024-11-28 05:07:38.205506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:08.992 [2024-11-28 05:07:38.205514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:08.992 [2024-11-28 05:07:38.205522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.992 [2024-11-28 05:07:38.205529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:08.992 [2024-11-28 05:07:38.205535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:08.992 [2024-11-28 05:07:38.205541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:08.992 
[2024-11-28 05:07:38.205547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:08.992 [2024-11-28 05:07:38.205554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:08.992 [2024-11-28 05:07:38.205559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:08.992 [2024-11-28 05:07:38.205565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:08.992 [2024-11-28 05:07:38.205571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:08.992 [2024-11-28 05:07:38.205578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:08.992 [2024-11-28 05:07:38.205584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:08.992 [2024-11-28 05:07:38.205590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:08.992 [2024-11-28 05:07:38.205596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.992 [2024-11-28 05:07:38.205603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:08.992 [2024-11-28 05:07:38.205609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:08.992 [2024-11-28 05:07:38.205615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.992 [2024-11-28 05:07:38.205621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:08.992 [2024-11-28 05:07:38.205629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:08.992 [2024-11-28 05:07:38.205635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.992 [2024-11-28 05:07:38.205642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:08.992 [2024-11-28 05:07:38.205647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:08.992 [2024-11-28 05:07:38.205653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.992 [2024-11-28 05:07:38.205658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:08.992 [2024-11-28 05:07:38.205664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:08.992 [2024-11-28 05:07:38.205669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.992 [2024-11-28 05:07:38.205692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:08.992 [2024-11-28 05:07:38.205698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:08.992 [2024-11-28 05:07:38.205704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.992 [2024-11-28 05:07:38.205709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:08.992 [2024-11-28 05:07:38.205716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:08.992 [2024-11-28 05:07:38.205721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:08.992 [2024-11-28 05:07:38.205727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:08.992 [2024-11-28 05:07:38.205733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:08.992 [2024-11-28 05:07:38.205740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:08.992 [2024-11-28 05:07:38.205745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:08.992 [2024-11-28 05:07:38.205752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:18:08.992 [2024-11-28 05:07:38.205757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.992 [2024-11-28 05:07:38.205764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:08.992 [2024-11-28 05:07:38.205769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:08.992 [2024-11-28 05:07:38.205776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.992 [2024-11-28 05:07:38.205781] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:08.992 [2024-11-28 05:07:38.205790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:08.992 [2024-11-28 05:07:38.205796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:08.992 [2024-11-28 05:07:38.205803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.992 [2024-11-28 05:07:38.205809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:08.992 [2024-11-28 05:07:38.205817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:08.992 [2024-11-28 05:07:38.205822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:08.992 [2024-11-28 05:07:38.205830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:08.992 [2024-11-28 05:07:38.205835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:08.992 [2024-11-28 05:07:38.205843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:08.992 [2024-11-28 05:07:38.205851] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:08.992 [2024-11-28 05:07:38.205860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:08.992 [2024-11-28 05:07:38.205869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:08.992 [2024-11-28 05:07:38.205877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:08.992 [2024-11-28 05:07:38.205883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:08.992 [2024-11-28 05:07:38.205890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:08.992 [2024-11-28 05:07:38.205897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:08.992 [2024-11-28 05:07:38.205905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:08.992 [2024-11-28 05:07:38.205911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:08.992 [2024-11-28 05:07:38.205922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:08.992 [2024-11-28 05:07:38.205928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:08.992 [2024-11-28 05:07:38.205934] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:08.992 [2024-11-28 05:07:38.205940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:08.992 [2024-11-28 05:07:38.205946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:08.992 [2024-11-28 05:07:38.205951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:08.992 [2024-11-28 05:07:38.205958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:08.992 [2024-11-28 05:07:38.205964] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:08.992 [2024-11-28 05:07:38.205972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:08.992 [2024-11-28 05:07:38.205978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:08.992 [2024-11-28 05:07:38.205985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:08.992 [2024-11-28 05:07:38.205992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:08.992 [2024-11-28 05:07:38.205998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:08.992 [2024-11-28 05:07:38.206003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.992 [2024-11-28 05:07:38.206012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:08.992 [2024-11-28 05:07:38.206019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:18:08.992 [2024-11-28 05:07:38.206026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.992 [2024-11-28 05:07:38.206051] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
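Annotation: the scrub notice above means the NV cache data region is zeroed before first use; the "Scrub NV cache" step that follows reports 3763.008 ms for the 5 chunks. As a rough cross-check against the 5171.00 MiB NV cache capacity logged during layout setup (a back-of-the-envelope estimate only, assuming the whole cache region is written once):

  # effective scrub rate, using only numbers reported in this log
  python3 -c 'print(5171 / 3.763008)'   # ~= 1374 MiB/s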
00:18:08.992 [2024-11-28 05:07:38.206062] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:13.191 [2024-11-28 05:07:41.969076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.191 [2024-11-28 05:07:41.969201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:13.191 [2024-11-28 05:07:41.969220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3763.008 ms 00:18:13.191 [2024-11-28 05:07:41.969232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.191 [2024-11-28 05:07:41.983570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.191 [2024-11-28 05:07:41.983633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:13.191 [2024-11-28 05:07:41.983650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.218 ms 00:18:13.191 [2024-11-28 05:07:41.983665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.191 [2024-11-28 05:07:41.983785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.191 [2024-11-28 05:07:41.983804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:13.191 [2024-11-28 05:07:41.983814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:13.191 [2024-11-28 05:07:41.983825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.191 [2024-11-28 05:07:42.006744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.191 [2024-11-28 05:07:42.006821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:13.191 [2024-11-28 05:07:42.006838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.856 ms 00:18:13.191 [2024-11-28 05:07:42.006851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.191 [2024-11-28 05:07:42.006903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.191 [2024-11-28 05:07:42.006917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:13.191 [2024-11-28 05:07:42.006929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:13.191 [2024-11-28 05:07:42.006943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.191 [2024-11-28 05:07:42.007571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.191 [2024-11-28 05:07:42.007619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:13.191 [2024-11-28 05:07:42.007636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:18:13.191 [2024-11-28 05:07:42.007653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.191 [2024-11-28 05:07:42.007806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.191 [2024-11-28 05:07:42.007834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:13.191 [2024-11-28 05:07:42.007847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:18:13.191 [2024-11-28 05:07:42.007862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.191 [2024-11-28 05:07:42.015891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.191 [2024-11-28 05:07:42.015945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:13.191 [2024-11-28 
05:07:42.015956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.006 ms 00:18:13.191 [2024-11-28 05:07:42.015966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.191 [2024-11-28 05:07:42.026009] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:13.191 [2024-11-28 05:07:42.033613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.191 [2024-11-28 05:07:42.033655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:13.191 [2024-11-28 05:07:42.033669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.568 ms 00:18:13.191 [2024-11-28 05:07:42.033699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.191 [2024-11-28 05:07:42.128752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.191 [2024-11-28 05:07:42.128822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:13.191 [2024-11-28 05:07:42.128848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.016 ms 00:18:13.191 [2024-11-28 05:07:42.128860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.191 [2024-11-28 05:07:42.129072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.191 [2024-11-28 05:07:42.129084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:13.191 [2024-11-28 05:07:42.129101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:18:13.191 [2024-11-28 05:07:42.129109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.192 [2024-11-28 05:07:42.135877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.192 [2024-11-28 05:07:42.135930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:13.192 [2024-11-28 05:07:42.135945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.740 ms 00:18:13.192 [2024-11-28 05:07:42.135962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.192 [2024-11-28 05:07:42.141566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.192 [2024-11-28 05:07:42.141620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:13.192 [2024-11-28 05:07:42.141634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.546 ms 00:18:13.192 [2024-11-28 05:07:42.141641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.192 [2024-11-28 05:07:42.142013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.192 [2024-11-28 05:07:42.142034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:13.192 [2024-11-28 05:07:42.142049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:18:13.192 [2024-11-28 05:07:42.142066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.192 [2024-11-28 05:07:42.191552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.192 [2024-11-28 05:07:42.191618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:13.192 [2024-11-28 05:07:42.191634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.457 ms 00:18:13.192 [2024-11-28 05:07:42.191646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.192 [2024-11-28 05:07:42.199805] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.192 [2024-11-28 05:07:42.199861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:13.192 [2024-11-28 05:07:42.199876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.091 ms 00:18:13.192 [2024-11-28 05:07:42.199884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.192 [2024-11-28 05:07:42.206686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.192 [2024-11-28 05:07:42.206743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:13.192 [2024-11-28 05:07:42.206756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.746 ms 00:18:13.192 [2024-11-28 05:07:42.206764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.192 [2024-11-28 05:07:42.213847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.192 [2024-11-28 05:07:42.213901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:13.192 [2024-11-28 05:07:42.213917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.026 ms 00:18:13.192 [2024-11-28 05:07:42.213925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.192 [2024-11-28 05:07:42.213984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.192 [2024-11-28 05:07:42.213998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:13.192 [2024-11-28 05:07:42.214009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:13.192 [2024-11-28 05:07:42.214018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.192 [2024-11-28 05:07:42.214112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.192 [2024-11-28 05:07:42.214123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:13.192 [2024-11-28 05:07:42.214137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:13.192 [2024-11-28 05:07:42.214145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.192 [2024-11-28 05:07:42.215376] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4017.321 ms, result 0 00:18:13.192 { 00:18:13.192 "name": "ftl0", 00:18:13.192 "uuid": "894852a0-6cba-46c7-8177-93afdf495e08" 00:18:13.192 } 00:18:13.192 05:07:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:13.192 05:07:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:13.192 05:07:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:13.192 05:07:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:13.454 [2024-11-28 05:07:42.554887] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:13.454 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:13.454 Zero copy mechanism will not be used. 00:18:13.454 Running I/O for 4 seconds... 
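Annotation: this first bdevperf pass drives queue-depth-1 random writes with a 69632-byte I/O size. As the notice above states, that size deliberately exceeds bdevperf's 65536-byte zero-copy threshold, so the bounce-buffer path is exercised. A quick sanity check of the numbers (shell one-liners, using only values that appear in this log):

  # 69632 bytes is 17 logical blocks of 4096 bytes, just over the 64 KiB threshold
  python3 -c 'print(69632 // 4096, 69632 > 65536)'        # 17 True
  # reported throughput should equal IOPS x io_size: 969.69 IOPS x 69632 B ~= 64.39 MiB/s
  python3 -c 'print(969.686548823093 * 69632 / 2**20)'    # ~= 64.39, matching the summary below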
00:18:15.343 789.00 IOPS, 52.39 MiB/s [2024-11-28T05:07:45.571Z] 778.00 IOPS, 51.66 MiB/s [2024-11-28T05:07:46.948Z] 824.67 IOPS, 54.76 MiB/s [2024-11-28T05:07:46.948Z] 969.75 IOPS, 64.40 MiB/s 00:18:17.664 Latency(us) 00:18:17.664 [2024-11-28T05:07:46.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.664 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:17.664 ftl0 : 4.00 969.69 64.39 0.00 0.00 1090.38 175.66 2558.42 00:18:17.664 [2024-11-28T05:07:46.948Z] =================================================================================================================== 00:18:17.664 [2024-11-28T05:07:46.948Z] Total : 969.69 64.39 0.00 0.00 1090.38 175.66 2558.42 00:18:17.664 [2024-11-28 05:07:46.562875] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:17.664 { 00:18:17.664 "results": [ 00:18:17.664 { 00:18:17.664 "job": "ftl0", 00:18:17.664 "core_mask": "0x1", 00:18:17.664 "workload": "randwrite", 00:18:17.664 "status": "finished", 00:18:17.664 "queue_depth": 1, 00:18:17.664 "io_size": 69632, 00:18:17.664 "runtime": 4.001293, 00:18:17.664 "iops": 969.686548823093, 00:18:17.664 "mibps": 64.39324738278351, 00:18:17.664 "io_failed": 0, 00:18:17.664 "io_timeout": 0, 00:18:17.664 "avg_latency_us": 1090.3811451229183, 00:18:17.664 "min_latency_us": 175.6553846153846, 00:18:17.664 "max_latency_us": 2558.424615384615 00:18:17.664 } 00:18:17.664 ], 00:18:17.664 "core_count": 1 00:18:17.664 } 00:18:17.664 05:07:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:17.664 [2024-11-28 05:07:46.667726] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:17.664 Running I/O for 4 seconds... 
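Annotation: the second pass switches to a deep queue (128) of 4 KiB random writes, where L2P updates and NV cache throttling dominate. For a fixed queue depth, Little's law predicts average latency ~= queue depth / IOPS; checking against the summary that follows (values copied from the JSON below):

  python3 -c 'print(128 / 6585.463693901605 * 1e6)'   # ~= 19437 us vs. the reported avg_latency_us of 19360.89

The small gap is expected: IOPS is averaged over the whole 4-second window, while the queue is not pinned at exactly 128 at every instant.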
00:18:19.548 8203.00 IOPS, 32.04 MiB/s [2024-11-28T05:07:49.777Z] 7280.50 IOPS, 28.44 MiB/s [2024-11-28T05:07:50.717Z] 6641.00 IOPS, 25.94 MiB/s [2024-11-28T05:07:50.717Z] 6618.75 IOPS, 25.85 MiB/s 00:18:21.433 Latency(us) 00:18:21.433 [2024-11-28T05:07:50.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.433 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:21.433 ftl0 : 4.04 6585.46 25.72 0.00 0.00 19360.89 270.97 57268.38 00:18:21.433 [2024-11-28T05:07:50.717Z] =================================================================================================================== 00:18:21.433 [2024-11-28T05:07:50.717Z] Total : 6585.46 25.72 0.00 0.00 19360.89 0.00 57268.38 00:18:21.433 [2024-11-28 05:07:50.712302] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:21.433 { 00:18:21.433 "results": [ 00:18:21.433 { 00:18:21.433 "job": "ftl0", 00:18:21.433 "core_mask": "0x1", 00:18:21.433 "workload": "randwrite", 00:18:21.433 "status": "finished", 00:18:21.433 "queue_depth": 128, 00:18:21.433 "io_size": 4096, 00:18:21.433 "runtime": 4.03844, 00:18:21.433 "iops": 6585.463693901605, 00:18:21.433 "mibps": 25.724467554303146, 00:18:21.433 "io_failed": 0, 00:18:21.433 "io_timeout": 0, 00:18:21.433 "avg_latency_us": 19360.885628125587, 00:18:21.433 "min_latency_us": 270.9661538461539, 00:18:21.433 "max_latency_us": 57268.38153846154 00:18:21.433 } 00:18:21.433 ], 00:18:21.433 "core_count": 1 00:18:21.433 } 00:18:21.695 05:07:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:21.695 [2024-11-28 05:07:50.827159] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:21.695 Running I/O for 4 seconds... 
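Annotation: the final pass uses the verify workload, which writes each LBA and reads it back to compare payloads, covering the range start 0x0, length 0x1400000 blocks shown in the table below. That range matches the L2P size announced at startup:

  # 0x1400000 blocks = 20971520, the same count as "L2P entries: 20971520" logged above
  python3 -c 'print(0x1400000, 0x1400000 * 4096 / 2**30)'   # 20971520 blocks, 80.0 GiB of logical space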
00:18:23.584 4421.00 IOPS, 17.27 MiB/s [2024-11-28T05:07:54.257Z] 4819.50 IOPS, 18.83 MiB/s [2024-11-28T05:07:55.200Z] 4813.00 IOPS, 18.80 MiB/s [2024-11-28T05:07:55.200Z] 5140.75 IOPS, 20.08 MiB/s 00:18:25.916 Latency(us) 00:18:25.916 [2024-11-28T05:07:55.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.917 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:25.917 Verification LBA range: start 0x0 length 0x1400000 00:18:25.917 ftl0 : 4.02 5151.90 20.12 0.00 0.00 24771.05 280.42 65334.35 00:18:25.917 [2024-11-28T05:07:55.201Z] =================================================================================================================== 00:18:25.917 [2024-11-28T05:07:55.201Z] Total : 5151.90 20.12 0.00 0.00 24771.05 0.00 65334.35 00:18:25.917 [2024-11-28 05:07:54.851274] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:25.917 { 00:18:25.917 "results": [ 00:18:25.917 { 00:18:25.917 "job": "ftl0", 00:18:25.917 "core_mask": "0x1", 00:18:25.917 "workload": "verify", 00:18:25.917 "status": "finished", 00:18:25.917 "verify_range": { 00:18:25.917 "start": 0, 00:18:25.917 "length": 20971520 00:18:25.917 }, 00:18:25.917 "queue_depth": 128, 00:18:25.917 "io_size": 4096, 00:18:25.917 "runtime": 4.015414, 00:18:25.917 "iops": 5151.897164277457, 00:18:25.917 "mibps": 20.124598297958816, 00:18:25.917 "io_failed": 0, 00:18:25.917 "io_timeout": 0, 00:18:25.917 "avg_latency_us": 24771.04829536201, 00:18:25.917 "min_latency_us": 280.41846153846154, 00:18:25.917 "max_latency_us": 65334.35076923077 00:18:25.917 } 00:18:25.917 ], 00:18:25.917 "core_count": 1 00:18:25.917 } 00:18:25.917 05:07:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:18:25.917 [2024-11-28 05:07:55.055615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.917 [2024-11-28 05:07:55.055675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:25.917 [2024-11-28 05:07:55.055692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:25.917 [2024-11-28 05:07:55.055707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.917 [2024-11-28 05:07:55.055732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:25.917 [2024-11-28 05:07:55.056495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.917 [2024-11-28 05:07:55.056543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:25.917 [2024-11-28 05:07:55.056555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:18:25.917 [2024-11-28 05:07:55.056566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.917 [2024-11-28 05:07:55.059605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.917 [2024-11-28 05:07:55.059656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:25.917 [2024-11-28 05:07:55.059667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.015 ms 00:18:25.917 [2024-11-28 05:07:55.059681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.183 [2024-11-28 05:07:55.276505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.183 [2024-11-28 05:07:55.276578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:18:26.183 [2024-11-28 05:07:55.276593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 216.804 ms 00:18:26.183 [2024-11-28 05:07:55.276605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.183 [2024-11-28 05:07:55.282805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.183 [2024-11-28 05:07:55.282849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:26.184 [2024-11-28 05:07:55.282863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.158 ms 00:18:26.184 [2024-11-28 05:07:55.282875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.184 [2024-11-28 05:07:55.285829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.184 [2024-11-28 05:07:55.285879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:26.184 [2024-11-28 05:07:55.285890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.896 ms 00:18:26.184 [2024-11-28 05:07:55.285904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.184 [2024-11-28 05:07:55.292519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.184 [2024-11-28 05:07:55.292578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:26.184 [2024-11-28 05:07:55.292589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.574 ms 00:18:26.184 [2024-11-28 05:07:55.292604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.184 [2024-11-28 05:07:55.292725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.184 [2024-11-28 05:07:55.292738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:26.184 [2024-11-28 05:07:55.292747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:18:26.184 [2024-11-28 05:07:55.292757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.184 [2024-11-28 05:07:55.295736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.184 [2024-11-28 05:07:55.295787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:26.184 [2024-11-28 05:07:55.295798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.959 ms 00:18:26.184 [2024-11-28 05:07:55.295808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.184 [2024-11-28 05:07:55.298730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.184 [2024-11-28 05:07:55.298781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:26.184 [2024-11-28 05:07:55.298790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.880 ms 00:18:26.184 [2024-11-28 05:07:55.298800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.184 [2024-11-28 05:07:55.301133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.184 [2024-11-28 05:07:55.301196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:26.184 [2024-11-28 05:07:55.301206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.294 ms 00:18:26.184 [2024-11-28 05:07:55.301219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.184 [2024-11-28 05:07:55.303461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.184 [2024-11-28 05:07:55.303514] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:26.184 [2024-11-28 05:07:55.303523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.180 ms 00:18:26.184 [2024-11-28 05:07:55.303533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.184 [2024-11-28 05:07:55.303572] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:26.184 [2024-11-28 05:07:55.303590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:18:26.184 [2024-11-28 05:07:55.303791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.303999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:26.184 [2024-11-28 05:07:55.304300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304480] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:26.185 [2024-11-28 05:07:55.304526] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:26.185 [2024-11-28 05:07:55.304540] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 894852a0-6cba-46c7-8177-93afdf495e08 00:18:26.185 [2024-11-28 05:07:55.304559] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:26.185 [2024-11-28 05:07:55.304568] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:26.185 [2024-11-28 05:07:55.304578] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:26.185 [2024-11-28 05:07:55.304586] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:26.185 [2024-11-28 05:07:55.304598] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:26.185 [2024-11-28 05:07:55.304606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:26.185 [2024-11-28 05:07:55.304620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:26.185 [2024-11-28 05:07:55.304627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:26.185 [2024-11-28 05:07:55.304636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:26.185 [2024-11-28 05:07:55.304643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.185 [2024-11-28 05:07:55.304653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:26.185 [2024-11-28 05:07:55.304665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.073 ms 00:18:26.185 [2024-11-28 05:07:55.304675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.185 [2024-11-28 05:07:55.306867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.185 [2024-11-28 05:07:55.306911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:26.185 [2024-11-28 05:07:55.306921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.174 ms 00:18:26.185 [2024-11-28 05:07:55.306932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.185 [2024-11-28 05:07:55.307061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.185 [2024-11-28 05:07:55.307073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:26.185 [2024-11-28 05:07:55.307084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:26.185 [2024-11-28 05:07:55.307097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.185 [2024-11-28 05:07:55.314512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.185 [2024-11-28 05:07:55.314563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:26.185 [2024-11-28 05:07:55.314574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.185 [2024-11-28 05:07:55.314585] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:26.185 [2024-11-28 05:07:55.314648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.185 [2024-11-28 05:07:55.314660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:26.185 [2024-11-28 05:07:55.314668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.185 [2024-11-28 05:07:55.314678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.185 [2024-11-28 05:07:55.314736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.185 [2024-11-28 05:07:55.314749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:26.185 [2024-11-28 05:07:55.314757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.185 [2024-11-28 05:07:55.314766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.185 [2024-11-28 05:07:55.314794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.185 [2024-11-28 05:07:55.314807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:26.185 [2024-11-28 05:07:55.314814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.185 [2024-11-28 05:07:55.314827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.185 [2024-11-28 05:07:55.328478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.185 [2024-11-28 05:07:55.328544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:26.185 [2024-11-28 05:07:55.328556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.185 [2024-11-28 05:07:55.328567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.185 [2024-11-28 05:07:55.340265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.185 [2024-11-28 05:07:55.340325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:26.185 [2024-11-28 05:07:55.340336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.185 [2024-11-28 05:07:55.340347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.185 [2024-11-28 05:07:55.340418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.185 [2024-11-28 05:07:55.340432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:26.185 [2024-11-28 05:07:55.340441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.185 [2024-11-28 05:07:55.340452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.185 [2024-11-28 05:07:55.340495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.185 [2024-11-28 05:07:55.340508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:26.185 [2024-11-28 05:07:55.340518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.185 [2024-11-28 05:07:55.340532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.185 [2024-11-28 05:07:55.340614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.185 [2024-11-28 05:07:55.340628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:26.185 [2024-11-28 05:07:55.340636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:18:26.185 [2024-11-28 05:07:55.340646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.185 [2024-11-28 05:07:55.340682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.185 [2024-11-28 05:07:55.340695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:26.185 [2024-11-28 05:07:55.340703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.185 [2024-11-28 05:07:55.340716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.185 [2024-11-28 05:07:55.340754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.185 [2024-11-28 05:07:55.340766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:26.185 [2024-11-28 05:07:55.340774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.185 [2024-11-28 05:07:55.340784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.185 [2024-11-28 05:07:55.340829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.185 [2024-11-28 05:07:55.340842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:26.185 [2024-11-28 05:07:55.340851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.185 [2024-11-28 05:07:55.340866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.185 [2024-11-28 05:07:55.341005] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 285.352 ms, result 0 00:18:26.185 true 00:18:26.185 05:07:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 86875 00:18:26.185 05:07:55 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 86875 ']' 00:18:26.185 05:07:55 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 86875 00:18:26.185 05:07:55 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:18:26.185 05:07:55 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.185 05:07:55 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86875 00:18:26.185 killing process with pid 86875 00:18:26.185 Received shutdown signal, test time was about 4.000000 seconds 00:18:26.185 00:18:26.185 Latency(us) 00:18:26.185 [2024-11-28T05:07:55.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.185 [2024-11-28T05:07:55.469Z] =================================================================================================================== 00:18:26.185 [2024-11-28T05:07:55.470Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:26.186 05:07:55 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:26.186 05:07:55 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:26.186 05:07:55 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86875' 00:18:26.186 05:07:55 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 86875 00:18:26.186 05:07:55 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 86875 00:18:26.447 Remove shared memory files 00:18:26.447 05:07:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:26.447 05:07:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:18:26.447 05:07:55 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:26.447 05:07:55 
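The killprocess helper traced above is the harness's guarded kill: verify the PID is non-empty and still alive (kill -0), resolve the command name with ps so a sudo wrapper is never signalled directly, then TERM and reap with wait. A standalone sketch of that pattern, assuming the target was started as a child of the current shell (the pid value is this run's, purely illustrative):

    pid=86875
    [ -n "$pid" ] || exit 1
    kill -0 "$pid" 2>/dev/null || exit 0   # already gone, nothing to do
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && exit 1           # refuse to TERM the sudo wrapper
    kill "$pid"
    wait "$pid"                            # reap; only works for our own children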
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:26.447 05:07:55 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:26.447 05:07:55 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:26.447 05:07:55 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:26.447 05:07:55 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:26.447 ************************************ 00:18:26.447 END TEST ftl_bdevperf 00:18:26.447 ************************************ 00:18:26.447 00:18:26.447 real 0m21.358s 00:18:26.447 user 0m23.914s 00:18:26.447 sys 0m0.920s 00:18:26.447 05:07:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.447 05:07:55 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:26.447 05:07:55 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:26.447 05:07:55 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:26.447 05:07:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.447 05:07:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:26.447 ************************************ 00:18:26.447 START TEST ftl_trim 00:18:26.447 ************************************ 00:18:26.447 05:07:55 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:26.744 * Looking for test storage... 00:18:26.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:26.744 05:07:55 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:26.744 05:07:55 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:18:26.744 05:07:55 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:26.744 05:07:55 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.744 05:07:55 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:18:26.744 05:07:55 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.744 05:07:55 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:26.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.744 --rc genhtml_branch_coverage=1 00:18:26.744 --rc genhtml_function_coverage=1 00:18:26.744 --rc genhtml_legend=1 00:18:26.744 --rc geninfo_all_blocks=1 00:18:26.744 --rc geninfo_unexecuted_blocks=1 00:18:26.744 00:18:26.744 ' 00:18:26.744 05:07:55 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:26.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.744 --rc genhtml_branch_coverage=1 00:18:26.744 --rc genhtml_function_coverage=1 00:18:26.744 --rc genhtml_legend=1 00:18:26.744 --rc geninfo_all_blocks=1 00:18:26.744 --rc geninfo_unexecuted_blocks=1 00:18:26.744 00:18:26.744 ' 00:18:26.744 05:07:55 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:26.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.744 --rc genhtml_branch_coverage=1 00:18:26.744 --rc genhtml_function_coverage=1 00:18:26.744 --rc genhtml_legend=1 00:18:26.744 --rc geninfo_all_blocks=1 00:18:26.744 --rc geninfo_unexecuted_blocks=1 00:18:26.744 00:18:26.744 ' 00:18:26.744 05:07:55 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:26.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.744 --rc genhtml_branch_coverage=1 00:18:26.744 --rc genhtml_function_coverage=1 00:18:26.744 --rc genhtml_legend=1 00:18:26.744 --rc geninfo_all_blocks=1 00:18:26.744 --rc geninfo_unexecuted_blocks=1 00:18:26.744 00:18:26.744 ' 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
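The scripts/common.sh xtrace above is the harness checking whether the installed lcov predates 2.x: cmp_versions splits both version strings on ".", "-" and ":" into arrays (IFS=.-: read -ra ...) and compares them field by field as integers. A self-contained sketch of the same idea — not the autotest helper itself, and assuming numeric fields:

    version_lt() {    # succeed (return 0) when $1 sorts before $2
        local -a a b
        local i
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1      # equal
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov"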
00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:26.744 05:07:55 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:26.745 05:07:55 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:26.745 05:07:55 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:26.745 05:07:55 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:26.745 05:07:55 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:26.745 05:07:55 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:26.745 05:07:55 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:26.745 05:07:55 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:26.745 05:07:55 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:26.745 05:07:55 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:26.745 05:07:55 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:26.745 05:07:55 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:26.745 05:07:55 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:26.745 05:07:55 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=87216 00:18:26.745 05:07:55 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:26.745 05:07:55 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 87216 00:18:26.745 05:07:55 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87216 ']' 00:18:26.745 05:07:55 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.745 05:07:55 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.745 05:07:55 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.745 05:07:55 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.745 05:07:55 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:26.745 [2024-11-28 05:07:55.974332] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:18:26.745 [2024-11-28 05:07:55.974720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87216 ] 00:18:27.026 [2024-11-28 05:07:56.124381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:27.026 [2024-11-28 05:07:56.155946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.026 [2024-11-28 05:07:56.156285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.026 [2024-11-28 05:07:56.156222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.600 05:07:56 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.600 05:07:56 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:18:27.600 05:07:56 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:27.600 05:07:56 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:27.600 05:07:56 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:27.600 05:07:56 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:27.600 05:07:56 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:27.600 05:07:56 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:27.861 05:07:57 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:27.861 05:07:57 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:27.861 05:07:57 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:27.861 05:07:57 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:27.861 05:07:57 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:27.861 05:07:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:27.861 05:07:57 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:27.861 05:07:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:28.122 05:07:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:28.122 { 00:18:28.122 "name": "nvme0n1", 00:18:28.122 "aliases": [ 
00:18:28.122 "328e068d-908c-4797-aca6-9bea4ed987d4" 00:18:28.122 ], 00:18:28.122 "product_name": "NVMe disk", 00:18:28.122 "block_size": 4096, 00:18:28.122 "num_blocks": 1310720, 00:18:28.122 "uuid": "328e068d-908c-4797-aca6-9bea4ed987d4", 00:18:28.122 "numa_id": -1, 00:18:28.122 "assigned_rate_limits": { 00:18:28.122 "rw_ios_per_sec": 0, 00:18:28.122 "rw_mbytes_per_sec": 0, 00:18:28.122 "r_mbytes_per_sec": 0, 00:18:28.122 "w_mbytes_per_sec": 0 00:18:28.122 }, 00:18:28.122 "claimed": true, 00:18:28.122 "claim_type": "read_many_write_one", 00:18:28.122 "zoned": false, 00:18:28.122 "supported_io_types": { 00:18:28.122 "read": true, 00:18:28.122 "write": true, 00:18:28.122 "unmap": true, 00:18:28.122 "flush": true, 00:18:28.122 "reset": true, 00:18:28.122 "nvme_admin": true, 00:18:28.122 "nvme_io": true, 00:18:28.122 "nvme_io_md": false, 00:18:28.122 "write_zeroes": true, 00:18:28.122 "zcopy": false, 00:18:28.122 "get_zone_info": false, 00:18:28.122 "zone_management": false, 00:18:28.122 "zone_append": false, 00:18:28.122 "compare": true, 00:18:28.122 "compare_and_write": false, 00:18:28.122 "abort": true, 00:18:28.122 "seek_hole": false, 00:18:28.122 "seek_data": false, 00:18:28.122 "copy": true, 00:18:28.122 "nvme_iov_md": false 00:18:28.122 }, 00:18:28.122 "driver_specific": { 00:18:28.122 "nvme": [ 00:18:28.122 { 00:18:28.122 "pci_address": "0000:00:11.0", 00:18:28.122 "trid": { 00:18:28.122 "trtype": "PCIe", 00:18:28.122 "traddr": "0000:00:11.0" 00:18:28.122 }, 00:18:28.122 "ctrlr_data": { 00:18:28.122 "cntlid": 0, 00:18:28.122 "vendor_id": "0x1b36", 00:18:28.122 "model_number": "QEMU NVMe Ctrl", 00:18:28.122 "serial_number": "12341", 00:18:28.122 "firmware_revision": "8.0.0", 00:18:28.122 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:28.122 "oacs": { 00:18:28.122 "security": 0, 00:18:28.122 "format": 1, 00:18:28.123 "firmware": 0, 00:18:28.123 "ns_manage": 1 00:18:28.123 }, 00:18:28.123 "multi_ctrlr": false, 00:18:28.123 "ana_reporting": false 00:18:28.123 }, 00:18:28.123 "vs": { 00:18:28.123 "nvme_version": "1.4" 00:18:28.123 }, 00:18:28.123 "ns_data": { 00:18:28.123 "id": 1, 00:18:28.123 "can_share": false 00:18:28.123 } 00:18:28.123 } 00:18:28.123 ], 00:18:28.123 "mp_policy": "active_passive" 00:18:28.123 } 00:18:28.123 } 00:18:28.123 ]' 00:18:28.123 05:07:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:28.123 05:07:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:28.123 05:07:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:28.123 05:07:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:28.123 05:07:57 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:28.123 05:07:57 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:18:28.123 05:07:57 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:28.123 05:07:57 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:28.123 05:07:57 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:28.384 05:07:57 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:28.384 05:07:57 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:28.384 05:07:57 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=00a19742-c4c4-4c06-a306-60d2ec3075dc 00:18:28.384 05:07:57 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:28.384 05:07:57 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 00a19742-c4c4-4c06-a306-60d2ec3075dc 00:18:28.645 05:07:57 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:28.906 05:07:58 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=ea56ed91-ad4d-4536-8d77-f3ed03e8865c 00:18:28.906 05:07:58 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ea56ed91-ad4d-4536-8d77-f3ed03e8865c 00:18:29.167 05:07:58 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=e9df41d2-29eb-4aeb-be4c-91dadc3f0044 00:18:29.167 05:07:58 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e9df41d2-29eb-4aeb-be4c-91dadc3f0044 00:18:29.167 05:07:58 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:29.167 05:07:58 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:29.167 05:07:58 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=e9df41d2-29eb-4aeb-be4c-91dadc3f0044 00:18:29.167 05:07:58 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:29.167 05:07:58 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size e9df41d2-29eb-4aeb-be4c-91dadc3f0044 00:18:29.167 05:07:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=e9df41d2-29eb-4aeb-be4c-91dadc3f0044 00:18:29.167 05:07:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:29.167 05:07:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:29.167 05:07:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:29.167 05:07:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e9df41d2-29eb-4aeb-be4c-91dadc3f0044 00:18:29.428 05:07:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:29.428 { 00:18:29.428 "name": "e9df41d2-29eb-4aeb-be4c-91dadc3f0044", 00:18:29.428 "aliases": [ 00:18:29.428 "lvs/nvme0n1p0" 00:18:29.428 ], 00:18:29.428 "product_name": "Logical Volume", 00:18:29.428 "block_size": 4096, 00:18:29.428 "num_blocks": 26476544, 00:18:29.428 "uuid": "e9df41d2-29eb-4aeb-be4c-91dadc3f0044", 00:18:29.428 "assigned_rate_limits": { 00:18:29.428 "rw_ios_per_sec": 0, 00:18:29.428 "rw_mbytes_per_sec": 0, 00:18:29.428 "r_mbytes_per_sec": 0, 00:18:29.428 "w_mbytes_per_sec": 0 00:18:29.428 }, 00:18:29.428 "claimed": false, 00:18:29.428 "zoned": false, 00:18:29.428 "supported_io_types": { 00:18:29.428 "read": true, 00:18:29.428 "write": true, 00:18:29.428 "unmap": true, 00:18:29.428 "flush": false, 00:18:29.428 "reset": true, 00:18:29.428 "nvme_admin": false, 00:18:29.428 "nvme_io": false, 00:18:29.428 "nvme_io_md": false, 00:18:29.428 "write_zeroes": true, 00:18:29.428 "zcopy": false, 00:18:29.428 "get_zone_info": false, 00:18:29.428 "zone_management": false, 00:18:29.428 "zone_append": false, 00:18:29.428 "compare": false, 00:18:29.428 "compare_and_write": false, 00:18:29.428 "abort": false, 00:18:29.428 "seek_hole": true, 00:18:29.428 "seek_data": true, 00:18:29.428 "copy": false, 00:18:29.428 "nvme_iov_md": false 00:18:29.428 }, 00:18:29.429 "driver_specific": { 00:18:29.429 "lvol": { 00:18:29.429 "lvol_store_uuid": "ea56ed91-ad4d-4536-8d77-f3ed03e8865c", 00:18:29.429 "base_bdev": "nvme0n1", 00:18:29.429 "thin_provision": true, 00:18:29.429 "num_allocated_clusters": 0, 00:18:29.429 "snapshot": false, 00:18:29.429 "clone": false, 00:18:29.429 "esnap_clone": false 00:18:29.429 } 00:18:29.429 } 00:18:29.429 } 00:18:29.429 ]' 00:18:29.429 05:07:58 ftl.ftl_trim -- 
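The base volume for ftl_trim is assembled in the three RPCs traced around this point: clear_lvols deletes whatever lvstore survived the previous test, a fresh store named lvs is created on nvme0n1, and a 103424 MiB thin-provisioned volume (-t) is carved from it. Replayed by hand it would look roughly like this (UUIDs of course differ per run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for u in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $rpc bdev_lvol_delete_lvstore -u "$u"   # drop stale stores first
    done
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)      # prints the new store UUID
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"   # thin-provisioned base bdev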
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:29.429 05:07:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:29.429 05:07:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:29.429 05:07:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:29.429 05:07:58 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:29.429 05:07:58 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:29.429 05:07:58 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:29.429 05:07:58 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:29.429 05:07:58 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:29.689 05:07:58 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:29.689 05:07:58 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:29.689 05:07:58 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size e9df41d2-29eb-4aeb-be4c-91dadc3f0044 00:18:29.689 05:07:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=e9df41d2-29eb-4aeb-be4c-91dadc3f0044 00:18:29.689 05:07:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:29.690 05:07:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:29.690 05:07:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:29.690 05:07:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e9df41d2-29eb-4aeb-be4c-91dadc3f0044 00:18:29.948 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:29.948 { 00:18:29.948 "name": "e9df41d2-29eb-4aeb-be4c-91dadc3f0044", 00:18:29.948 "aliases": [ 00:18:29.948 "lvs/nvme0n1p0" 00:18:29.948 ], 00:18:29.948 "product_name": "Logical Volume", 00:18:29.948 "block_size": 4096, 00:18:29.948 "num_blocks": 26476544, 00:18:29.948 "uuid": "e9df41d2-29eb-4aeb-be4c-91dadc3f0044", 00:18:29.948 "assigned_rate_limits": { 00:18:29.948 "rw_ios_per_sec": 0, 00:18:29.949 "rw_mbytes_per_sec": 0, 00:18:29.949 "r_mbytes_per_sec": 0, 00:18:29.949 "w_mbytes_per_sec": 0 00:18:29.949 }, 00:18:29.949 "claimed": false, 00:18:29.949 "zoned": false, 00:18:29.949 "supported_io_types": { 00:18:29.949 "read": true, 00:18:29.949 "write": true, 00:18:29.949 "unmap": true, 00:18:29.949 "flush": false, 00:18:29.949 "reset": true, 00:18:29.949 "nvme_admin": false, 00:18:29.949 "nvme_io": false, 00:18:29.949 "nvme_io_md": false, 00:18:29.949 "write_zeroes": true, 00:18:29.949 "zcopy": false, 00:18:29.949 "get_zone_info": false, 00:18:29.949 "zone_management": false, 00:18:29.949 "zone_append": false, 00:18:29.949 "compare": false, 00:18:29.949 "compare_and_write": false, 00:18:29.949 "abort": false, 00:18:29.949 "seek_hole": true, 00:18:29.949 "seek_data": true, 00:18:29.949 "copy": false, 00:18:29.949 "nvme_iov_md": false 00:18:29.949 }, 00:18:29.949 "driver_specific": { 00:18:29.949 "lvol": { 00:18:29.949 "lvol_store_uuid": "ea56ed91-ad4d-4536-8d77-f3ed03e8865c", 00:18:29.949 "base_bdev": "nvme0n1", 00:18:29.949 "thin_provision": true, 00:18:29.949 "num_allocated_clusters": 0, 00:18:29.949 "snapshot": false, 00:18:29.949 "clone": false, 00:18:29.949 "esnap_clone": false 00:18:29.949 } 00:18:29.949 } 00:18:29.949 } 00:18:29.949 ]' 00:18:29.949 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:29.949 05:07:59 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:18:29.949 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:29.949 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:29.949 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:29.949 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:29.949 05:07:59 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:29.949 05:07:59 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:30.207 05:07:59 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:30.207 05:07:59 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:30.207 05:07:59 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size e9df41d2-29eb-4aeb-be4c-91dadc3f0044 00:18:30.207 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=e9df41d2-29eb-4aeb-be4c-91dadc3f0044 00:18:30.207 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:30.207 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:18:30.207 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:18:30.207 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e9df41d2-29eb-4aeb-be4c-91dadc3f0044 00:18:30.207 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:30.207 { 00:18:30.207 "name": "e9df41d2-29eb-4aeb-be4c-91dadc3f0044", 00:18:30.207 "aliases": [ 00:18:30.207 "lvs/nvme0n1p0" 00:18:30.207 ], 00:18:30.207 "product_name": "Logical Volume", 00:18:30.207 "block_size": 4096, 00:18:30.207 "num_blocks": 26476544, 00:18:30.207 "uuid": "e9df41d2-29eb-4aeb-be4c-91dadc3f0044", 00:18:30.207 "assigned_rate_limits": { 00:18:30.207 "rw_ios_per_sec": 0, 00:18:30.207 "rw_mbytes_per_sec": 0, 00:18:30.207 "r_mbytes_per_sec": 0, 00:18:30.207 "w_mbytes_per_sec": 0 00:18:30.207 }, 00:18:30.207 "claimed": false, 00:18:30.207 "zoned": false, 00:18:30.207 "supported_io_types": { 00:18:30.207 "read": true, 00:18:30.207 "write": true, 00:18:30.207 "unmap": true, 00:18:30.207 "flush": false, 00:18:30.207 "reset": true, 00:18:30.207 "nvme_admin": false, 00:18:30.207 "nvme_io": false, 00:18:30.207 "nvme_io_md": false, 00:18:30.207 "write_zeroes": true, 00:18:30.207 "zcopy": false, 00:18:30.207 "get_zone_info": false, 00:18:30.207 "zone_management": false, 00:18:30.207 "zone_append": false, 00:18:30.207 "compare": false, 00:18:30.207 "compare_and_write": false, 00:18:30.207 "abort": false, 00:18:30.207 "seek_hole": true, 00:18:30.207 "seek_data": true, 00:18:30.207 "copy": false, 00:18:30.207 "nvme_iov_md": false 00:18:30.207 }, 00:18:30.207 "driver_specific": { 00:18:30.207 "lvol": { 00:18:30.207 "lvol_store_uuid": "ea56ed91-ad4d-4536-8d77-f3ed03e8865c", 00:18:30.207 "base_bdev": "nvme0n1", 00:18:30.207 "thin_provision": true, 00:18:30.207 "num_allocated_clusters": 0, 00:18:30.207 "snapshot": false, 00:18:30.207 "clone": false, 00:18:30.207 "esnap_clone": false 00:18:30.207 } 00:18:30.207 } 00:18:30.207 } 00:18:30.207 ]' 00:18:30.207 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:30.466 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:18:30.466 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:30.466 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:18:30.466 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:30.466 05:07:59 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:18:30.466 05:07:59 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:30.466 05:07:59 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e9df41d2-29eb-4aeb-be4c-91dadc3f0044 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:30.466 [2024-11-28 05:07:59.720744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.466 [2024-11-28 05:07:59.720786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:30.466 [2024-11-28 05:07:59.720797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:30.466 [2024-11-28 05:07:59.720807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.466 [2024-11-28 05:07:59.722802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.466 [2024-11-28 05:07:59.722838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:30.466 [2024-11-28 05:07:59.722847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.968 ms 00:18:30.466 [2024-11-28 05:07:59.722855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.466 [2024-11-28 05:07:59.722992] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:30.466 [2024-11-28 05:07:59.723225] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:30.466 [2024-11-28 05:07:59.723250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.466 [2024-11-28 05:07:59.723258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:30.466 [2024-11-28 05:07:59.723269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:18:30.466 [2024-11-28 05:07:59.723276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.466 [2024-11-28 05:07:59.723596] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d2392ca9-dc0a-4073-94e9-23a6ae314a67 00:18:30.466 [2024-11-28 05:07:59.724658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.466 [2024-11-28 05:07:59.724690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:30.466 [2024-11-28 05:07:59.724700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:30.466 [2024-11-28 05:07:59.724708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.466 [2024-11-28 05:07:59.729825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.466 [2024-11-28 05:07:59.729852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:30.466 [2024-11-28 05:07:59.729861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.042 ms 00:18:30.466 [2024-11-28 05:07:59.729867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.466 [2024-11-28 05:07:59.729978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.466 [2024-11-28 05:07:59.729995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:30.466 [2024-11-28 05:07:59.730005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
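Everything above converges in one call: the thin lvol becomes the FTL base device and nvc0n1p0 (the 5171 MiB head split of nvc0n1) becomes the non-volatile write cache, with the L2P capped at 60 MiB of DRAM and 10% overprovisioning. The -t 240 RPC timeout exists because first-time creation scrubs the whole cache, as the trace below shows. A condensed replay of the two commands as traced (the -d UUID is this run's lvol):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_split_create nvc0n1 -s 5171 1   # -> nvc0n1p0, the NV cache partition
    # base bdev = the thin lvol, cache bdev = nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d e9df41d2-29eb-4aeb-be4c-91dadc3f0044 \
        -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10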
duration: 0.055 ms 00:18:30.466 [2024-11-28 05:07:59.730012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.466 [2024-11-28 05:07:59.730053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.466 [2024-11-28 05:07:59.730063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:30.466 [2024-11-28 05:07:59.730076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:30.466 [2024-11-28 05:07:59.730086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.466 [2024-11-28 05:07:59.730126] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:30.466 [2024-11-28 05:07:59.731410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.466 [2024-11-28 05:07:59.731437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:30.466 [2024-11-28 05:07:59.731447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.293 ms 00:18:30.466 [2024-11-28 05:07:59.731455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.466 [2024-11-28 05:07:59.731493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.466 [2024-11-28 05:07:59.731502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:30.466 [2024-11-28 05:07:59.731513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:30.466 [2024-11-28 05:07:59.731536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.466 [2024-11-28 05:07:59.731562] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:30.466 [2024-11-28 05:07:59.731684] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:30.466 [2024-11-28 05:07:59.731701] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:30.466 [2024-11-28 05:07:59.731720] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:30.466 [2024-11-28 05:07:59.731733] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:30.466 [2024-11-28 05:07:59.731747] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:30.466 [2024-11-28 05:07:59.731754] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:30.466 [2024-11-28 05:07:59.731763] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:30.466 [2024-11-28 05:07:59.731772] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:30.466 [2024-11-28 05:07:59.731787] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:30.467 [2024-11-28 05:07:59.731793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.467 [2024-11-28 05:07:59.731800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:30.467 [2024-11-28 05:07:59.731807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:18:30.467 [2024-11-28 05:07:59.731814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.467 [2024-11-28 05:07:59.731888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.467 
[2024-11-28 05:07:59.731912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:30.467 [2024-11-28 05:07:59.731919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:30.467 [2024-11-28 05:07:59.731926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.467 [2024-11-28 05:07:59.732043] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:30.467 [2024-11-28 05:07:59.732055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:30.467 [2024-11-28 05:07:59.732061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:30.467 [2024-11-28 05:07:59.732069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.467 [2024-11-28 05:07:59.732086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:30.467 [2024-11-28 05:07:59.732093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:30.467 [2024-11-28 05:07:59.732098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:30.467 [2024-11-28 05:07:59.732105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:30.467 [2024-11-28 05:07:59.732111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:30.467 [2024-11-28 05:07:59.732119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:30.467 [2024-11-28 05:07:59.732125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:30.467 [2024-11-28 05:07:59.732132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:30.467 [2024-11-28 05:07:59.732137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:30.467 [2024-11-28 05:07:59.732145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:30.467 [2024-11-28 05:07:59.732151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:30.467 [2024-11-28 05:07:59.732157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.467 [2024-11-28 05:07:59.732164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:30.467 [2024-11-28 05:07:59.732171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:30.467 [2024-11-28 05:07:59.732191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.467 [2024-11-28 05:07:59.732198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:30.467 [2024-11-28 05:07:59.732205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:30.467 [2024-11-28 05:07:59.732228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.467 [2024-11-28 05:07:59.732237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:30.467 [2024-11-28 05:07:59.732249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:30.467 [2024-11-28 05:07:59.732256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.467 [2024-11-28 05:07:59.732263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:30.467 [2024-11-28 05:07:59.732270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:30.467 [2024-11-28 05:07:59.732278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.467 [2024-11-28 05:07:59.732284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:18:30.467 [2024-11-28 05:07:59.732293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:30.467 [2024-11-28 05:07:59.732299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.467 [2024-11-28 05:07:59.732305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:30.467 [2024-11-28 05:07:59.732310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:30.467 [2024-11-28 05:07:59.732318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:30.467 [2024-11-28 05:07:59.732324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:30.467 [2024-11-28 05:07:59.732330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:30.467 [2024-11-28 05:07:59.732336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:30.467 [2024-11-28 05:07:59.732342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:30.467 [2024-11-28 05:07:59.732347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:30.467 [2024-11-28 05:07:59.732354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.467 [2024-11-28 05:07:59.732363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:30.467 [2024-11-28 05:07:59.732375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:30.467 [2024-11-28 05:07:59.732384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.467 [2024-11-28 05:07:59.732395] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:30.467 [2024-11-28 05:07:59.732402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:30.467 [2024-11-28 05:07:59.732410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:30.467 [2024-11-28 05:07:59.732417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.467 [2024-11-28 05:07:59.732429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:30.467 [2024-11-28 05:07:59.732437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:30.467 [2024-11-28 05:07:59.732444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:30.467 [2024-11-28 05:07:59.732449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:30.467 [2024-11-28 05:07:59.732456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:30.467 [2024-11-28 05:07:59.732461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:30.467 [2024-11-28 05:07:59.732470] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:30.467 [2024-11-28 05:07:59.732477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:30.467 [2024-11-28 05:07:59.732488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:30.467 [2024-11-28 05:07:59.732498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:30.467 [2024-11-28 05:07:59.732505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:18:30.467 [2024-11-28 05:07:59.732510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:30.467 [2024-11-28 05:07:59.732519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:30.467 [2024-11-28 05:07:59.732525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:30.467 [2024-11-28 05:07:59.732533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:30.467 [2024-11-28 05:07:59.732539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:30.467 [2024-11-28 05:07:59.732546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:30.467 [2024-11-28 05:07:59.732552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:30.467 [2024-11-28 05:07:59.732558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:30.467 [2024-11-28 05:07:59.732563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:30.467 [2024-11-28 05:07:59.732571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:30.467 [2024-11-28 05:07:59.732578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:30.467 [2024-11-28 05:07:59.732589] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:30.467 [2024-11-28 05:07:59.732602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:30.467 [2024-11-28 05:07:59.732614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:30.467 [2024-11-28 05:07:59.732621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:30.467 [2024-11-28 05:07:59.732628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:30.467 [2024-11-28 05:07:59.732634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:30.467 [2024-11-28 05:07:59.732641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.467 [2024-11-28 05:07:59.732647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:30.467 [2024-11-28 05:07:59.732655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:18:30.467 [2024-11-28 05:07:59.732661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.467 [2024-11-28 05:07:59.732746] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
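The superblock region table above is expressed in 4 KiB FTL blocks, and multiplying it out reproduces the human-readable layout dump that precedes it: the L2P region (type 0x2) spans blk_sz 0x5a00 = 23040 blocks x 4096 B = 90.00 MiB, matching the "Region l2p ... blocks: 90.00 MiB" entry. Quick check in shell:

    printf '%d MiB\n' $(( 0x5a00 * 4096 / 1024 / 1024 ))   # -> 90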
needs scrubbing, this may take a while. 00:18:30.467 [2024-11-28 05:07:59.732760] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:33.018 [2024-11-28 05:08:02.178889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.018 [2024-11-28 05:08:02.178948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:33.018 [2024-11-28 05:08:02.178979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2446.129 ms 00:18:33.018 [2024-11-28 05:08:02.178987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.018 [2024-11-28 05:08:02.187463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.018 [2024-11-28 05:08:02.187505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.018 [2024-11-28 05:08:02.187519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.379 ms 00:18:33.018 [2024-11-28 05:08:02.187539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.018 [2024-11-28 05:08:02.187705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.018 [2024-11-28 05:08:02.187720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:33.018 [2024-11-28 05:08:02.187735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:18:33.018 [2024-11-28 05:08:02.187742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.018 [2024-11-28 05:08:02.204164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.018 [2024-11-28 05:08:02.204233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.018 [2024-11-28 05:08:02.204253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.381 ms 00:18:33.018 [2024-11-28 05:08:02.204264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.018 [2024-11-28 05:08:02.204378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.018 [2024-11-28 05:08:02.204406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:33.018 [2024-11-28 05:08:02.204426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:33.018 [2024-11-28 05:08:02.204444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.018 [2024-11-28 05:08:02.204854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.018 [2024-11-28 05:08:02.204886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.018 [2024-11-28 05:08:02.204904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:18:33.018 [2024-11-28 05:08:02.204916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.018 [2024-11-28 05:08:02.205127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.018 [2024-11-28 05:08:02.205153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.018 [2024-11-28 05:08:02.205201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:18:33.018 [2024-11-28 05:08:02.205213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.018 [2024-11-28 05:08:02.211659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.018 [2024-11-28 05:08:02.211699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:18:33.018 [2024-11-28 05:08:02.211714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.401 ms 00:18:33.018 [2024-11-28 05:08:02.211725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.018 [2024-11-28 05:08:02.220421] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:33.018 [2024-11-28 05:08:02.234688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.018 [2024-11-28 05:08:02.234722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:33.018 [2024-11-28 05:08:02.234733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.828 ms 00:18:33.018 [2024-11-28 05:08:02.234743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.018 [2024-11-28 05:08:02.290557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.018 [2024-11-28 05:08:02.290600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:33.018 [2024-11-28 05:08:02.290611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.747 ms 00:18:33.018 [2024-11-28 05:08:02.290626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.018 [2024-11-28 05:08:02.290814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.019 [2024-11-28 05:08:02.290827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:33.019 [2024-11-28 05:08:02.290835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:18:33.019 [2024-11-28 05:08:02.290845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.019 [2024-11-28 05:08:02.293813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.019 [2024-11-28 05:08:02.293848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:33.019 [2024-11-28 05:08:02.293858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.939 ms 00:18:33.019 [2024-11-28 05:08:02.293868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.019 [2024-11-28 05:08:02.296266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.019 [2024-11-28 05:08:02.296301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:33.019 [2024-11-28 05:08:02.296312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.348 ms 00:18:33.019 [2024-11-28 05:08:02.296322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.019 [2024-11-28 05:08:02.296662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.019 [2024-11-28 05:08:02.296688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:33.019 [2024-11-28 05:08:02.296697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:18:33.019 [2024-11-28 05:08:02.296708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.277 [2024-11-28 05:08:02.326416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.277 [2024-11-28 05:08:02.326533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:33.277 [2024-11-28 05:08:02.326569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.666 ms 00:18:33.277 [2024-11-28 05:08:02.326604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
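The L2P numbers above are internally consistent: the layout reported 23592960 L2P entries at 4 bytes each, i.e. a 90 MiB mapping table, so the 60 MiB --l2p_dram_limit can keep at most a 59-of-60 MiB slice resident, and Clear L2P (a full-table wipe) is the longest startup step at 55.747 ms. The same entry count is also the exposed capacity: 23592960 x 4 KiB blocks = 90 GiB out of the 103424 MiB base volume, the remainder going to overprovisioning and metadata.

    echo "$(( 23592960 * 4 / 1024 / 1024 )) MiB L2P table"   # -> 90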
00:18:33.277 [2024-11-28 05:08:02.334455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.277 [2024-11-28 05:08:02.334495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:33.277 [2024-11-28 05:08:02.334516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.586 ms 00:18:33.277 [2024-11-28 05:08:02.334526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.277 [2024-11-28 05:08:02.337807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.277 [2024-11-28 05:08:02.337843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:33.277 [2024-11-28 05:08:02.337851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:18:33.277 [2024-11-28 05:08:02.337860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.277 [2024-11-28 05:08:02.341358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.277 [2024-11-28 05:08:02.341394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:33.277 [2024-11-28 05:08:02.341403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.453 ms 00:18:33.277 [2024-11-28 05:08:02.341414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.277 [2024-11-28 05:08:02.341463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.277 [2024-11-28 05:08:02.341475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:33.277 [2024-11-28 05:08:02.341483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:33.277 [2024-11-28 05:08:02.341492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.277 [2024-11-28 05:08:02.341566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.277 [2024-11-28 05:08:02.341581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:33.277 [2024-11-28 05:08:02.341589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:33.277 [2024-11-28 05:08:02.341598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.277 [2024-11-28 05:08:02.342469] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:33.277 [2024-11-28 05:08:02.343442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2621.403 ms, result 0 00:18:33.277 [2024-11-28 05:08:02.344405] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:33.277 { 00:18:33.277 "name": "ftl0", 00:18:33.277 "uuid": "d2392ca9-dc0a-4073-94e9-23a6ae314a67" 00:18:33.277 } 00:18:33.277 05:08:02 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:33.277 05:08:02 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:33.277 05:08:02 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:33.277 05:08:02 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:18:33.277 05:08:02 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:33.277 05:08:02 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:33.277 05:08:02 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:33.535 05:08:02 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:33.535 [ 00:18:33.535 { 00:18:33.535 "name": "ftl0", 00:18:33.535 "aliases": [ 00:18:33.535 "d2392ca9-dc0a-4073-94e9-23a6ae314a67" 00:18:33.535 ], 00:18:33.535 "product_name": "FTL disk", 00:18:33.535 "block_size": 4096, 00:18:33.535 "num_blocks": 23592960, 00:18:33.535 "uuid": "d2392ca9-dc0a-4073-94e9-23a6ae314a67", 00:18:33.535 "assigned_rate_limits": { 00:18:33.535 "rw_ios_per_sec": 0, 00:18:33.535 "rw_mbytes_per_sec": 0, 00:18:33.535 "r_mbytes_per_sec": 0, 00:18:33.535 "w_mbytes_per_sec": 0 00:18:33.535 }, 00:18:33.535 "claimed": false, 00:18:33.535 "zoned": false, 00:18:33.535 "supported_io_types": { 00:18:33.535 "read": true, 00:18:33.535 "write": true, 00:18:33.535 "unmap": true, 00:18:33.535 "flush": true, 00:18:33.535 "reset": false, 00:18:33.535 "nvme_admin": false, 00:18:33.535 "nvme_io": false, 00:18:33.535 "nvme_io_md": false, 00:18:33.535 "write_zeroes": true, 00:18:33.535 "zcopy": false, 00:18:33.535 "get_zone_info": false, 00:18:33.535 "zone_management": false, 00:18:33.535 "zone_append": false, 00:18:33.535 "compare": false, 00:18:33.535 "compare_and_write": false, 00:18:33.535 "abort": false, 00:18:33.535 "seek_hole": false, 00:18:33.535 "seek_data": false, 00:18:33.535 "copy": false, 00:18:33.535 "nvme_iov_md": false 00:18:33.535 }, 00:18:33.535 "driver_specific": { 00:18:33.535 "ftl": { 00:18:33.535 "base_bdev": "e9df41d2-29eb-4aeb-be4c-91dadc3f0044", 00:18:33.535 "cache": "nvc0n1p0" 00:18:33.535 } 00:18:33.535 } 00:18:33.535 } 00:18:33.535 ] 00:18:33.535 05:08:02 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:18:33.535 05:08:02 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:33.535 05:08:02 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:33.793 05:08:02 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:33.793 05:08:02 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:34.051 05:08:03 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:34.051 { 00:18:34.051 "name": "ftl0", 00:18:34.051 "aliases": [ 00:18:34.051 "d2392ca9-dc0a-4073-94e9-23a6ae314a67" 00:18:34.051 ], 00:18:34.051 "product_name": "FTL disk", 00:18:34.051 "block_size": 4096, 00:18:34.051 "num_blocks": 23592960, 00:18:34.051 "uuid": "d2392ca9-dc0a-4073-94e9-23a6ae314a67", 00:18:34.051 "assigned_rate_limits": { 00:18:34.051 "rw_ios_per_sec": 0, 00:18:34.051 "rw_mbytes_per_sec": 0, 00:18:34.051 "r_mbytes_per_sec": 0, 00:18:34.051 "w_mbytes_per_sec": 0 00:18:34.051 }, 00:18:34.051 "claimed": false, 00:18:34.051 "zoned": false, 00:18:34.051 "supported_io_types": { 00:18:34.051 "read": true, 00:18:34.051 "write": true, 00:18:34.051 "unmap": true, 00:18:34.051 "flush": true, 00:18:34.051 "reset": false, 00:18:34.051 "nvme_admin": false, 00:18:34.051 "nvme_io": false, 00:18:34.051 "nvme_io_md": false, 00:18:34.051 "write_zeroes": true, 00:18:34.051 "zcopy": false, 00:18:34.051 "get_zone_info": false, 00:18:34.051 "zone_management": false, 00:18:34.051 "zone_append": false, 00:18:34.051 "compare": false, 00:18:34.051 "compare_and_write": false, 00:18:34.051 "abort": false, 00:18:34.051 "seek_hole": false, 00:18:34.051 "seek_data": false, 00:18:34.051 "copy": false, 00:18:34.051 "nvme_iov_md": false 00:18:34.051 }, 00:18:34.051 "driver_specific": { 00:18:34.051 "ftl": { 00:18:34.051 "base_bdev": "e9df41d2-29eb-4aeb-be4c-91dadc3f0044", 
00:18:34.051 "cache": "nvc0n1p0" 00:18:34.051 } 00:18:34.051 } 00:18:34.051 } 00:18:34.051 ]' 00:18:34.051 05:08:03 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:34.051 05:08:03 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:34.051 05:08:03 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:34.312 [2024-11-28 05:08:03.367739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.312 [2024-11-28 05:08:03.367780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:34.312 [2024-11-28 05:08:03.367792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:34.312 [2024-11-28 05:08:03.367798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.312 [2024-11-28 05:08:03.367827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:34.312 [2024-11-28 05:08:03.368258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.312 [2024-11-28 05:08:03.368282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:34.312 [2024-11-28 05:08:03.368302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:18:34.312 [2024-11-28 05:08:03.368312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.312 [2024-11-28 05:08:03.368845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.312 [2024-11-28 05:08:03.368861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:34.312 [2024-11-28 05:08:03.368869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:18:34.312 [2024-11-28 05:08:03.368878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.312 [2024-11-28 05:08:03.371596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.312 [2024-11-28 05:08:03.371618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:34.312 [2024-11-28 05:08:03.371625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.690 ms 00:18:34.312 [2024-11-28 05:08:03.371633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.312 [2024-11-28 05:08:03.376799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.312 [2024-11-28 05:08:03.376829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:34.312 [2024-11-28 05:08:03.376846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.124 ms 00:18:34.312 [2024-11-28 05:08:03.376856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.312 [2024-11-28 05:08:03.378339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.312 [2024-11-28 05:08:03.378373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:34.312 [2024-11-28 05:08:03.378380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms 00:18:34.312 [2024-11-28 05:08:03.378387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.312 [2024-11-28 05:08:03.382325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.312 [2024-11-28 05:08:03.382359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:34.312 [2024-11-28 05:08:03.382367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.888 ms 00:18:34.312 [2024-11-28 05:08:03.382380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.312 [2024-11-28 05:08:03.382546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.312 [2024-11-28 05:08:03.382561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:34.312 [2024-11-28 05:08:03.382577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:18:34.312 [2024-11-28 05:08:03.382593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.312 [2024-11-28 05:08:03.384226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.312 [2024-11-28 05:08:03.384258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:34.312 [2024-11-28 05:08:03.384265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:18:34.312 [2024-11-28 05:08:03.384275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.312 [2024-11-28 05:08:03.385555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.312 [2024-11-28 05:08:03.385584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:34.312 [2024-11-28 05:08:03.385592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:18:34.312 [2024-11-28 05:08:03.385599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.312 [2024-11-28 05:08:03.386752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.312 [2024-11-28 05:08:03.386785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:34.312 [2024-11-28 05:08:03.386792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:18:34.313 [2024-11-28 05:08:03.386799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.313 [2024-11-28 05:08:03.388003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.313 [2024-11-28 05:08:03.388032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:34.313 [2024-11-28 05:08:03.388040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:18:34.313 [2024-11-28 05:08:03.388047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.313 [2024-11-28 05:08:03.388086] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:34.313 [2024-11-28 05:08:03.388100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388152] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 
05:08:03.388335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:18:34.313 [2024-11-28 05:08:03.388503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:34.313 [2024-11-28 05:08:03.388555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:34.314 [2024-11-28 05:08:03.388814] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:34.314 [2024-11-28 05:08:03.388820] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d2392ca9-dc0a-4073-94e9-23a6ae314a67 00:18:34.314 [2024-11-28 05:08:03.388828] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:34.314 [2024-11-28 05:08:03.388836] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:34.314 [2024-11-28 05:08:03.388843] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:34.314 [2024-11-28 05:08:03.388849] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:34.314 [2024-11-28 05:08:03.388856] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:34.314 [2024-11-28 05:08:03.388861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:34.314 
[2024-11-28 05:08:03.388869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:34.314 [2024-11-28 05:08:03.388873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:34.314 [2024-11-28 05:08:03.388880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:34.314 [2024-11-28 05:08:03.388886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.314 [2024-11-28 05:08:03.388893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:34.314 [2024-11-28 05:08:03.388900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:18:34.314 [2024-11-28 05:08:03.388909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.314 [2024-11-28 05:08:03.390381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.314 [2024-11-28 05:08:03.390402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:34.314 [2024-11-28 05:08:03.390410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.433 ms 00:18:34.314 [2024-11-28 05:08:03.390417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.314 [2024-11-28 05:08:03.390514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.314 [2024-11-28 05:08:03.390528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:34.314 [2024-11-28 05:08:03.390535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:34.314 [2024-11-28 05:08:03.390543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.314 [2024-11-28 05:08:03.395385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.314 [2024-11-28 05:08:03.395415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:34.314 [2024-11-28 05:08:03.395425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.314 [2024-11-28 05:08:03.395433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.314 [2024-11-28 05:08:03.395504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.314 [2024-11-28 05:08:03.395514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:34.314 [2024-11-28 05:08:03.395520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.314 [2024-11-28 05:08:03.395529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.314 [2024-11-28 05:08:03.395583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.314 [2024-11-28 05:08:03.395591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:34.314 [2024-11-28 05:08:03.395598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.314 [2024-11-28 05:08:03.395605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.314 [2024-11-28 05:08:03.395629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.314 [2024-11-28 05:08:03.395637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:34.314 [2024-11-28 05:08:03.395643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.314 [2024-11-28 05:08:03.395650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.314 [2024-11-28 05:08:03.404409] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:18:34.314 [2024-11-28 05:08:03.404446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:34.314 [2024-11-28 05:08:03.404454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.315 [2024-11-28 05:08:03.404461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.315 [2024-11-28 05:08:03.411557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.315 [2024-11-28 05:08:03.411592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:34.315 [2024-11-28 05:08:03.411600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.315 [2024-11-28 05:08:03.411610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.315 [2024-11-28 05:08:03.411658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.315 [2024-11-28 05:08:03.411669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:34.315 [2024-11-28 05:08:03.411676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.315 [2024-11-28 05:08:03.411683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.315 [2024-11-28 05:08:03.411742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.315 [2024-11-28 05:08:03.411755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:34.315 [2024-11-28 05:08:03.411762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.315 [2024-11-28 05:08:03.411769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.315 [2024-11-28 05:08:03.411840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.315 [2024-11-28 05:08:03.411854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:34.315 [2024-11-28 05:08:03.411862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.315 [2024-11-28 05:08:03.411869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.315 [2024-11-28 05:08:03.411914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.315 [2024-11-28 05:08:03.411933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:34.315 [2024-11-28 05:08:03.411939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.315 [2024-11-28 05:08:03.411948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.315 [2024-11-28 05:08:03.411992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.315 [2024-11-28 05:08:03.412001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:34.315 [2024-11-28 05:08:03.412009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.315 [2024-11-28 05:08:03.412026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.315 [2024-11-28 05:08:03.412071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.315 [2024-11-28 05:08:03.412081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:34.315 [2024-11-28 05:08:03.412096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.315 [2024-11-28 05:08:03.412103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.315 
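The statistics dump a few records above reports total writes: 960, user writes: 0, and WAF: inf. Assuming WAF here is the usual write-amplification ratio of media writes to user writes (an assumption, not stated in the log), a one-line check reproduces the reported value:

    # Values copied from the ftl_dev_dump_stats output above; the division is guarded
    # because zero user writes makes the ratio infinite, as the log prints.
    total_writes, user_writes = 960, 0
    waf = float("inf") if user_writes == 0 else total_writes / user_writes
    print(f"WAF: {waf}")   # -> WAF: inf, matching the log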
[2024-11-28 05:08:03.412328] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 44.575 ms, result 0 00:18:34.315 true 00:18:34.315 05:08:03 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 87216 00:18:34.315 05:08:03 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87216 ']' 00:18:34.315 05:08:03 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87216 00:18:34.315 05:08:03 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:18:34.315 05:08:03 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.315 05:08:03 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87216 00:18:34.315 05:08:03 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.315 05:08:03 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.315 killing process with pid 87216 00:18:34.315 05:08:03 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87216' 00:18:34.315 05:08:03 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87216 00:18:34.315 05:08:03 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87216 00:18:39.582 05:08:08 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:40.527 65536+0 records in 00:18:40.527 65536+0 records out 00:18:40.527 268435456 bytes (268 MB, 256 MiB) copied, 1.08743 s, 247 MB/s 00:18:40.527 05:08:09 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:40.527 [2024-11-28 05:08:09.594558] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
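The dd run above (bs=4K, count=65536) reports 268435456 bytes copied in 1.08743 s at 247 MB/s. A quick arithmetic check confirms the figures are self-consistent, keeping in mind that dd reports decimal megabytes:

    size_bytes = 65536 * 4096                  # count * block size, as given to dd
    print(size_bytes)                          # 268435456, as reported
    print(round(size_bytes / 1.08743 / 1e6))   # 247 decimal MB/s, as reported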
00:18:40.527 [2024-11-28 05:08:09.594661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87382 ] 00:18:40.527 [2024-11-28 05:08:09.739986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.527 [2024-11-28 05:08:09.760023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.790 [2024-11-28 05:08:09.859046] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:40.790 [2024-11-28 05:08:09.859139] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:40.790 [2024-11-28 05:08:10.020220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.790 [2024-11-28 05:08:10.020284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:40.790 [2024-11-28 05:08:10.020300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:40.790 [2024-11-28 05:08:10.020308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.790 [2024-11-28 05:08:10.022876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.790 [2024-11-28 05:08:10.022934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:40.790 [2024-11-28 05:08:10.022945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.548 ms 00:18:40.790 [2024-11-28 05:08:10.022953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.790 [2024-11-28 05:08:10.023069] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:40.790 [2024-11-28 05:08:10.023350] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:40.790 [2024-11-28 05:08:10.023377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.790 [2024-11-28 05:08:10.023386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:40.790 [2024-11-28 05:08:10.023396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:18:40.790 [2024-11-28 05:08:10.023403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.790 [2024-11-28 05:08:10.025078] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:40.790 [2024-11-28 05:08:10.028838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.790 [2024-11-28 05:08:10.028896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:40.790 [2024-11-28 05:08:10.028913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.761 ms 00:18:40.790 [2024-11-28 05:08:10.028921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.790 [2024-11-28 05:08:10.029003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.790 [2024-11-28 05:08:10.029014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:40.790 [2024-11-28 05:08:10.029027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:40.790 [2024-11-28 05:08:10.029035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.790 [2024-11-28 05:08:10.037080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:40.790 [2024-11-28 05:08:10.037126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:40.790 [2024-11-28 05:08:10.037136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.980 ms 00:18:40.790 [2024-11-28 05:08:10.037144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.790 [2024-11-28 05:08:10.037314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.790 [2024-11-28 05:08:10.037327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:40.790 [2024-11-28 05:08:10.037336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:40.790 [2024-11-28 05:08:10.037351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.790 [2024-11-28 05:08:10.037378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.790 [2024-11-28 05:08:10.037392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:40.790 [2024-11-28 05:08:10.037400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:40.790 [2024-11-28 05:08:10.037407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.790 [2024-11-28 05:08:10.037428] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:40.790 [2024-11-28 05:08:10.039488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.790 [2024-11-28 05:08:10.039535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:40.790 [2024-11-28 05:08:10.039546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.066 ms 00:18:40.790 [2024-11-28 05:08:10.039560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.790 [2024-11-28 05:08:10.039606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.790 [2024-11-28 05:08:10.039615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:40.790 [2024-11-28 05:08:10.039624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:40.790 [2024-11-28 05:08:10.039631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.790 [2024-11-28 05:08:10.039650] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:40.790 [2024-11-28 05:08:10.039670] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:40.790 [2024-11-28 05:08:10.039713] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:40.790 [2024-11-28 05:08:10.039733] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:40.790 [2024-11-28 05:08:10.039839] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:40.790 [2024-11-28 05:08:10.039850] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:40.790 [2024-11-28 05:08:10.039862] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:40.790 [2024-11-28 05:08:10.039872] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:40.790 [2024-11-28 05:08:10.039882] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:40.790 [2024-11-28 05:08:10.039889] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:40.790 [2024-11-28 05:08:10.039897] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:40.790 [2024-11-28 05:08:10.039905] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:40.790 [2024-11-28 05:08:10.039915] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:40.790 [2024-11-28 05:08:10.039925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.790 [2024-11-28 05:08:10.039932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:40.790 [2024-11-28 05:08:10.039944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:18:40.790 [2024-11-28 05:08:10.039952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.790 [2024-11-28 05:08:10.040040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.790 [2024-11-28 05:08:10.040049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:40.790 [2024-11-28 05:08:10.040060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:40.790 [2024-11-28 05:08:10.040068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.790 [2024-11-28 05:08:10.040167] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:40.790 [2024-11-28 05:08:10.040204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:40.790 [2024-11-28 05:08:10.040218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.790 [2024-11-28 05:08:10.040227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.790 [2024-11-28 05:08:10.040237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:40.790 [2024-11-28 05:08:10.040245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:40.790 [2024-11-28 05:08:10.040254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:40.790 [2024-11-28 05:08:10.040264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:40.790 [2024-11-28 05:08:10.040272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:40.790 [2024-11-28 05:08:10.040281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.790 [2024-11-28 05:08:10.040289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:40.790 [2024-11-28 05:08:10.040297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:40.790 [2024-11-28 05:08:10.040305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.790 [2024-11-28 05:08:10.040313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:40.791 [2024-11-28 05:08:10.040322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:40.791 [2024-11-28 05:08:10.040329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.791 [2024-11-28 05:08:10.040338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:40.791 [2024-11-28 05:08:10.040350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:40.791 [2024-11-28 05:08:10.040358] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.791 [2024-11-28 05:08:10.040366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:40.791 [2024-11-28 05:08:10.040375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:40.791 [2024-11-28 05:08:10.040383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.791 [2024-11-28 05:08:10.040391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:40.791 [2024-11-28 05:08:10.040406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:40.791 [2024-11-28 05:08:10.040414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.791 [2024-11-28 05:08:10.040422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:40.791 [2024-11-28 05:08:10.040430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:40.791 [2024-11-28 05:08:10.040438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.791 [2024-11-28 05:08:10.040446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:40.791 [2024-11-28 05:08:10.040454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:40.791 [2024-11-28 05:08:10.040462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.791 [2024-11-28 05:08:10.040470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:40.791 [2024-11-28 05:08:10.040478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:40.791 [2024-11-28 05:08:10.040485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.791 [2024-11-28 05:08:10.040493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:40.791 [2024-11-28 05:08:10.040501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:40.791 [2024-11-28 05:08:10.040509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.791 [2024-11-28 05:08:10.040517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:40.791 [2024-11-28 05:08:10.040524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:40.791 [2024-11-28 05:08:10.040533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.791 [2024-11-28 05:08:10.040543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:40.791 [2024-11-28 05:08:10.040551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:40.791 [2024-11-28 05:08:10.040559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.791 [2024-11-28 05:08:10.040566] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:40.791 [2024-11-28 05:08:10.040575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:40.791 [2024-11-28 05:08:10.040583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.791 [2024-11-28 05:08:10.040592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.791 [2024-11-28 05:08:10.040601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:40.791 [2024-11-28 05:08:10.040609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:40.791 [2024-11-28 05:08:10.040618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:40.791 
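The layout setup lines above report L2P entries: 23592960 with L2P address size: 4, and the NV cache layout places Region l2p at 90.00 MiB. Those figures agree exactly; a two-line check with values copied from the log:

    entries, entry_bytes = 23592960, 4       # from the ftl_layout_setup output above
    print(entries * entry_bytes / 2**20)     # 90.0 -> matches "Region l2p ... blocks: 90.00 MiB"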
[2024-11-28 05:08:10.040627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:40.791 [2024-11-28 05:08:10.040634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:40.791 [2024-11-28 05:08:10.040643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:40.791 [2024-11-28 05:08:10.040653] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:40.791 [2024-11-28 05:08:10.040664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.791 [2024-11-28 05:08:10.040679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:40.791 [2024-11-28 05:08:10.040687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:40.791 [2024-11-28 05:08:10.040695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:40.791 [2024-11-28 05:08:10.040704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:40.791 [2024-11-28 05:08:10.040713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:40.791 [2024-11-28 05:08:10.040721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:40.791 [2024-11-28 05:08:10.040729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:40.791 [2024-11-28 05:08:10.040743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:40.791 [2024-11-28 05:08:10.040751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:40.791 [2024-11-28 05:08:10.040758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:40.791 [2024-11-28 05:08:10.040766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:40.791 [2024-11-28 05:08:10.040774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:40.791 [2024-11-28 05:08:10.040781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:40.791 [2024-11-28 05:08:10.040790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:40.791 [2024-11-28 05:08:10.040798] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:40.791 [2024-11-28 05:08:10.040809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.791 [2024-11-28 05:08:10.040819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:40.791 [2024-11-28 05:08:10.040827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:40.791 [2024-11-28 05:08:10.040835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:40.791 [2024-11-28 05:08:10.040842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:40.791 [2024-11-28 05:08:10.040850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.791 [2024-11-28 05:08:10.040858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:40.791 [2024-11-28 05:08:10.040866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:18:40.791 [2024-11-28 05:08:10.040873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.791 [2024-11-28 05:08:10.054854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.791 [2024-11-28 05:08:10.054906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:40.791 [2024-11-28 05:08:10.054919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.929 ms 00:18:40.791 [2024-11-28 05:08:10.054929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.791 [2024-11-28 05:08:10.055060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.791 [2024-11-28 05:08:10.055083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:40.791 [2024-11-28 05:08:10.055097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:40.791 [2024-11-28 05:08:10.055110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.079647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.079724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:41.054 [2024-11-28 05:08:10.079743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.510 ms 00:18:41.054 [2024-11-28 05:08:10.079756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.079892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.079917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:41.054 [2024-11-28 05:08:10.079931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:41.054 [2024-11-28 05:08:10.079943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.080571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.080630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:41.054 [2024-11-28 05:08:10.080649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:18:41.054 [2024-11-28 05:08:10.080663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.080881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.080899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:41.054 [2024-11-28 05:08:10.080912] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:18:41.054 [2024-11-28 05:08:10.080924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.089398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.089442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:41.054 [2024-11-28 05:08:10.089459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.440 ms 00:18:41.054 [2024-11-28 05:08:10.089467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.093320] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:41.054 [2024-11-28 05:08:10.093374] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:41.054 [2024-11-28 05:08:10.093387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.093395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:41.054 [2024-11-28 05:08:10.093404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.814 ms 00:18:41.054 [2024-11-28 05:08:10.093412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.109382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.109436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:41.054 [2024-11-28 05:08:10.109448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.905 ms 00:18:41.054 [2024-11-28 05:08:10.109456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.112647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.112695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:41.054 [2024-11-28 05:08:10.112706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.122 ms 00:18:41.054 [2024-11-28 05:08:10.112713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.115232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.115274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:41.054 [2024-11-28 05:08:10.115283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.416 ms 00:18:41.054 [2024-11-28 05:08:10.115290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.115630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.115654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:41.054 [2024-11-28 05:08:10.115664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:18:41.054 [2024-11-28 05:08:10.115672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.140475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.140536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:41.054 [2024-11-28 05:08:10.140549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.772 ms 00:18:41.054 [2024-11-28 05:08:10.140557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.149193] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:41.054 [2024-11-28 05:08:10.168453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.168507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:41.054 [2024-11-28 05:08:10.168531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.809 ms 00:18:41.054 [2024-11-28 05:08:10.168544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.168647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.168661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:41.054 [2024-11-28 05:08:10.168670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:41.054 [2024-11-28 05:08:10.168686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.168744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.168754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:41.054 [2024-11-28 05:08:10.168763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:41.054 [2024-11-28 05:08:10.168770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.168795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.168803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:41.054 [2024-11-28 05:08:10.168812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:41.054 [2024-11-28 05:08:10.168820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.168862] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:41.054 [2024-11-28 05:08:10.168880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.168888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:41.054 [2024-11-28 05:08:10.168897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:41.054 [2024-11-28 05:08:10.168905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.175073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.175126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:41.054 [2024-11-28 05:08:10.175137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.146 ms 00:18:41.054 [2024-11-28 05:08:10.175146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 [2024-11-28 05:08:10.175268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.054 [2024-11-28 05:08:10.175284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:41.054 [2024-11-28 05:08:10.175294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:41.054 [2024-11-28 05:08:10.175303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.054 
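
    # Editor's note: each management step above is logged by trace_step as an
    # Action / name / duration / status quadruple, and finish_msg then reports
    # the total for the whole sequence (here: 'FTL startup', 156.085 ms).
    # A rough sketch for cross-checking the two, assuming the console output
    # has been saved to ftl_startup.log (a hypothetical filename, not part of
    # this test run). It only matches the per-step "duration: X ms" form, so
    # the "duration = X ms" total from finish_msg is not double-counted.
    grep -o 'duration: [0-9.]* ms' ftl_startup.log \
        | awk '{sum += $2} END {printf "steps total: %.3f ms\n", sum}'
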
[2024-11-28 05:08:10.176601] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:41.054 [2024-11-28 05:08:10.178017] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.085 ms, result 0 00:18:41.054 [2024-11-28 05:08:10.179464] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:41.054 [2024-11-28 05:08:10.186675] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:41.991  [2024-11-28T05:08:12.208Z] Copying: 16/256 [MB] (16 MBps) [2024-11-28T05:08:13.584Z] Copying: 55/256 [MB] (38 MBps) [2024-11-28T05:08:14.530Z] Copying: 93/256 [MB] (38 MBps) [2024-11-28T05:08:15.468Z] Copying: 119/256 [MB] (26 MBps) [2024-11-28T05:08:16.404Z] Copying: 139/256 [MB] (19 MBps) [2024-11-28T05:08:17.345Z] Copying: 174/256 [MB] (35 MBps) [2024-11-28T05:08:18.281Z] Copying: 187/256 [MB] (12 MBps) [2024-11-28T05:08:19.225Z] Copying: 216/256 [MB] (29 MBps) [2024-11-28T05:08:20.611Z] Copying: 233/256 [MB] (16 MBps) [2024-11-28T05:08:20.611Z] Copying: 250/256 [MB] (17 MBps) [2024-11-28T05:08:20.611Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-28 05:08:20.538372] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:51.327 [2024-11-28 05:08:20.540232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.327 [2024-11-28 05:08:20.540288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:51.327 [2024-11-28 05:08:20.540302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:51.327 [2024-11-28 05:08:20.540311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.327 [2024-11-28 05:08:20.540334] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:51.327 [2024-11-28 05:08:20.540981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.327 [2024-11-28 05:08:20.541018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:51.327 [2024-11-28 05:08:20.541028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms 00:18:51.327 [2024-11-28 05:08:20.541037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.327 [2024-11-28 05:08:20.544055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.327 [2024-11-28 05:08:20.544102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:51.327 [2024-11-28 05:08:20.544113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.992 ms 00:18:51.327 [2024-11-28 05:08:20.544128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.327 [2024-11-28 05:08:20.551845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.327 [2024-11-28 05:08:20.551893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:51.327 [2024-11-28 05:08:20.551904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.698 ms 00:18:51.327 [2024-11-28 05:08:20.551912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.327 [2024-11-28 05:08:20.558871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.327 [2024-11-28 05:08:20.558912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finish L2P trims 00:18:51.327 [2024-11-28 05:08:20.558924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.903 ms 00:18:51.327 [2024-11-28 05:08:20.558933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.327 [2024-11-28 05:08:20.561946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.327 [2024-11-28 05:08:20.561995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:51.327 [2024-11-28 05:08:20.562005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.961 ms 00:18:51.327 [2024-11-28 05:08:20.562012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.327 [2024-11-28 05:08:20.567052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.327 [2024-11-28 05:08:20.567109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:51.327 [2024-11-28 05:08:20.567119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.981 ms 00:18:51.327 [2024-11-28 05:08:20.567127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.327 [2024-11-28 05:08:20.567258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.327 [2024-11-28 05:08:20.567281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:51.327 [2024-11-28 05:08:20.567295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:18:51.327 [2024-11-28 05:08:20.567306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.327 [2024-11-28 05:08:20.570711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.327 [2024-11-28 05:08:20.570757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:51.327 [2024-11-28 05:08:20.570768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.384 ms 00:18:51.327 [2024-11-28 05:08:20.570776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.327 [2024-11-28 05:08:20.574003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.327 [2024-11-28 05:08:20.574051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:51.327 [2024-11-28 05:08:20.574060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.183 ms 00:18:51.327 [2024-11-28 05:08:20.574067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.327 [2024-11-28 05:08:20.576608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.327 [2024-11-28 05:08:20.576655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:51.327 [2024-11-28 05:08:20.576663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.498 ms 00:18:51.327 [2024-11-28 05:08:20.576670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.327 [2024-11-28 05:08:20.579000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.327 [2024-11-28 05:08:20.579048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:51.327 [2024-11-28 05:08:20.579057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.256 ms 00:18:51.327 [2024-11-28 05:08:20.579064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.327 [2024-11-28 05:08:20.579104] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:18:51.327 [2024-11-28 05:08:20.579129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 
state: free 00:18:51.327 [2024-11-28 05:08:20.579343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 
0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:51.327 [2024-11-28 05:08:20.579818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:51.328 [2024-11-28 05:08:20.579825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:51.328 [2024-11-28 05:08:20.579832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:51.328 [2024-11-28 05:08:20.579840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:51.328 [2024-11-28 05:08:20.579848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:51.328 [2024-11-28 05:08:20.579855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:51.328 [2024-11-28 05:08:20.579865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:51.328 [2024-11-28 05:08:20.579873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:51.328 [2024-11-28 05:08:20.579880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:51.328 [2024-11-28 05:08:20.579887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:51.328 [2024-11-28 05:08:20.579895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:51.328 [2024-11-28 05:08:20.579903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:51.328 [2024-11-28 05:08:20.579910] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:51.328 [2024-11-28 05:08:20.579926] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:51.328 [2024-11-28 05:08:20.579934] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d2392ca9-dc0a-4073-94e9-23a6ae314a67 00:18:51.328 [2024-11-28 05:08:20.579943] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:51.328 [2024-11-28 05:08:20.579951] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:51.328 [2024-11-28 05:08:20.579959] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:51.328 [2024-11-28 05:08:20.579967] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:51.328 [2024-11-28 05:08:20.579974] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:51.328 [2024-11-28 05:08:20.579982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:51.328 [2024-11-28 05:08:20.579990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:51.328 [2024-11-28 05:08:20.579997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:51.328 [2024-11-28 05:08:20.580003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:51.328 [2024-11-28 05:08:20.580010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.328 [2024-11-28 05:08:20.580025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:51.328 [2024-11-28 05:08:20.580034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.907 ms 00:18:51.328 [2024-11-28 05:08:20.580042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.328 [2024-11-28 05:08:20.582502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.328 [2024-11-28 05:08:20.582539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:51.328 [2024-11-28 05:08:20.582549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.442 ms 00:18:51.328 [2024-11-28 05:08:20.582557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.328 [2024-11-28 05:08:20.582699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.328 [2024-11-28 05:08:20.582708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:51.328 [2024-11-28 05:08:20.582717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:18:51.328 [2024-11-28 05:08:20.582725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.328 [2024-11-28 05:08:20.590756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.328 [2024-11-28 05:08:20.590807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:51.328 [2024-11-28 05:08:20.590818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.328 [2024-11-28 05:08:20.590826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.328 [2024-11-28 05:08:20.590895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.328 [2024-11-28 05:08:20.590904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:51.328 [2024-11-28 05:08:20.590912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.328 [2024-11-28 05:08:20.590919] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.328 [2024-11-28 05:08:20.590968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.328 [2024-11-28 05:08:20.590978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:51.328 [2024-11-28 05:08:20.590991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.328 [2024-11-28 05:08:20.590999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.328 [2024-11-28 05:08:20.591020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.328 [2024-11-28 05:08:20.591029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:51.328 [2024-11-28 05:08:20.591037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.328 [2024-11-28 05:08:20.591044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.328 [2024-11-28 05:08:20.604992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.328 [2024-11-28 05:08:20.605048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:51.328 [2024-11-28 05:08:20.605060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.328 [2024-11-28 05:08:20.605069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.588 [2024-11-28 05:08:20.616367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.588 [2024-11-28 05:08:20.616421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:51.588 [2024-11-28 05:08:20.616432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.588 [2024-11-28 05:08:20.616449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.588 [2024-11-28 05:08:20.616539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.588 [2024-11-28 05:08:20.616550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:51.588 [2024-11-28 05:08:20.616559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.588 [2024-11-28 05:08:20.616568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.588 [2024-11-28 05:08:20.616601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.588 [2024-11-28 05:08:20.616611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:51.588 [2024-11-28 05:08:20.616622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.588 [2024-11-28 05:08:20.616630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.588 [2024-11-28 05:08:20.616703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.588 [2024-11-28 05:08:20.616717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:51.589 [2024-11-28 05:08:20.616726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.589 [2024-11-28 05:08:20.616741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.589 [2024-11-28 05:08:20.616773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.589 [2024-11-28 05:08:20.616782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:51.589 [2024-11-28 05:08:20.616793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:18:51.589 [2024-11-28 05:08:20.616801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.589 [2024-11-28 05:08:20.616843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.589 [2024-11-28 05:08:20.616853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:51.589 [2024-11-28 05:08:20.616861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.589 [2024-11-28 05:08:20.616869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.589 [2024-11-28 05:08:20.616915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:51.589 [2024-11-28 05:08:20.616926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:51.589 [2024-11-28 05:08:20.616938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:51.589 [2024-11-28 05:08:20.616945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.589 [2024-11-28 05:08:20.617097] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 76.869 ms, result 0 00:18:51.850 00:18:51.850 00:18:51.850 05:08:21 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=87502 00:18:51.850 05:08:21 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 87502 00:18:51.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.850 05:08:21 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87502 ']' 00:18:51.850 05:08:21 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.850 05:08:21 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.850 05:08:21 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.850 05:08:21 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.850 05:08:21 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:51.850 05:08:21 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:52.112 [2024-11-28 05:08:21.174806] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
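
    # Editor's note: a minimal sketch of the start-and-wait pattern behind the
    # spdk_tgt launch and the "Waiting for process to start up and listen on
    # UNIX domain socket /var/tmp/spdk.sock..." message above, assuming a stock
    # SPDK checkout at the path shown in this run. The polling loop merely
    # illustrates what waitforlisten() does; it is not its actual
    # implementation, and config.json is a placeholder input file.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!
    # Poll until the target answers RPCs on its UNIX domain socket.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
        >/dev/null 2>&1; do
        sleep 0.1
    done
    # Replay the saved bdev/FTL configuration, as ftl/trim.sh@75 does.
    "$SPDK/scripts/rpc.py" load_config < config.json
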
00:18:52.112 [2024-11-28 05:08:21.174967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87502 ] 00:18:52.112 [2024-11-28 05:08:21.321543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.112 [2024-11-28 05:08:21.349801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.053 05:08:21 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.053 05:08:21 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:18:53.053 05:08:21 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:53.053 [2024-11-28 05:08:22.183373] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:53.053 [2024-11-28 05:08:22.183469] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:53.317 [2024-11-28 05:08:22.360390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.317 [2024-11-28 05:08:22.360458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:53.317 [2024-11-28 05:08:22.360473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:53.317 [2024-11-28 05:08:22.360489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.317 [2024-11-28 05:08:22.363125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.317 [2024-11-28 05:08:22.363204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:53.317 [2024-11-28 05:08:22.363216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.615 ms 00:18:53.317 [2024-11-28 05:08:22.363225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.317 [2024-11-28 05:08:22.363350] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:53.317 [2024-11-28 05:08:22.363615] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:53.317 [2024-11-28 05:08:22.363639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.317 [2024-11-28 05:08:22.363650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:53.317 [2024-11-28 05:08:22.363659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:18:53.317 [2024-11-28 05:08:22.363669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.317 [2024-11-28 05:08:22.365551] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:53.317 [2024-11-28 05:08:22.369271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.317 [2024-11-28 05:08:22.369322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:53.317 [2024-11-28 05:08:22.369335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.717 ms 00:18:53.317 [2024-11-28 05:08:22.369344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.317 [2024-11-28 05:08:22.369428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.317 [2024-11-28 05:08:22.369439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:53.317 [2024-11-28 05:08:22.369456] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:53.317 [2024-11-28 05:08:22.369464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.317 [2024-11-28 05:08:22.377370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.317 [2024-11-28 05:08:22.377410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:53.317 [2024-11-28 05:08:22.377428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.849 ms 00:18:53.317 [2024-11-28 05:08:22.377439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.317 [2024-11-28 05:08:22.377576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.317 [2024-11-28 05:08:22.377592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:53.317 [2024-11-28 05:08:22.377605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:18:53.317 [2024-11-28 05:08:22.377613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.318 [2024-11-28 05:08:22.377645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.318 [2024-11-28 05:08:22.377656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:53.318 [2024-11-28 05:08:22.377669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:53.318 [2024-11-28 05:08:22.377690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.318 [2024-11-28 05:08:22.377720] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:53.318 [2024-11-28 05:08:22.379744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.318 [2024-11-28 05:08:22.379789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:53.318 [2024-11-28 05:08:22.379802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.032 ms 00:18:53.318 [2024-11-28 05:08:22.379812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.318 [2024-11-28 05:08:22.379852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.318 [2024-11-28 05:08:22.379863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:53.318 [2024-11-28 05:08:22.379871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:53.318 [2024-11-28 05:08:22.379881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.318 [2024-11-28 05:08:22.379902] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:53.318 [2024-11-28 05:08:22.379925] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:53.318 [2024-11-28 05:08:22.379963] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:53.318 [2024-11-28 05:08:22.379984] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:53.318 [2024-11-28 05:08:22.380089] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:53.318 [2024-11-28 05:08:22.380103] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:53.318 [2024-11-28 05:08:22.380114] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:53.318 [2024-11-28 05:08:22.380127] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:53.318 [2024-11-28 05:08:22.380136] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:53.318 [2024-11-28 05:08:22.380151] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:53.318 [2024-11-28 05:08:22.380159] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:53.318 [2024-11-28 05:08:22.380173] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:53.318 [2024-11-28 05:08:22.380198] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:53.318 [2024-11-28 05:08:22.380207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.318 [2024-11-28 05:08:22.380220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:53.318 [2024-11-28 05:08:22.380229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:18:53.318 [2024-11-28 05:08:22.380237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.318 [2024-11-28 05:08:22.380326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.318 [2024-11-28 05:08:22.380344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:53.318 [2024-11-28 05:08:22.380354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:53.318 [2024-11-28 05:08:22.380362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.318 [2024-11-28 05:08:22.380467] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:53.318 [2024-11-28 05:08:22.380479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:53.318 [2024-11-28 05:08:22.380494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:53.318 [2024-11-28 05:08:22.380503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.318 [2024-11-28 05:08:22.380516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:53.318 [2024-11-28 05:08:22.380525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:53.318 [2024-11-28 05:08:22.380535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:53.318 [2024-11-28 05:08:22.380543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:53.318 [2024-11-28 05:08:22.380553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:53.318 [2024-11-28 05:08:22.380561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:53.318 [2024-11-28 05:08:22.380572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:53.318 [2024-11-28 05:08:22.380581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:53.318 [2024-11-28 05:08:22.380591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:53.318 [2024-11-28 05:08:22.380599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:53.318 [2024-11-28 05:08:22.380610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:53.318 [2024-11-28 05:08:22.380618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.318 
[2024-11-28 05:08:22.380628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:53.318 [2024-11-28 05:08:22.380637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:53.318 [2024-11-28 05:08:22.380647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.318 [2024-11-28 05:08:22.380655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:53.318 [2024-11-28 05:08:22.380666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:53.318 [2024-11-28 05:08:22.380674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.318 [2024-11-28 05:08:22.380684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:53.318 [2024-11-28 05:08:22.380692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:53.318 [2024-11-28 05:08:22.380701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.318 [2024-11-28 05:08:22.380709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:53.318 [2024-11-28 05:08:22.380719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:53.318 [2024-11-28 05:08:22.380727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.318 [2024-11-28 05:08:22.380737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:53.318 [2024-11-28 05:08:22.380745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:53.318 [2024-11-28 05:08:22.380754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.318 [2024-11-28 05:08:22.380762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:53.318 [2024-11-28 05:08:22.380772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:53.318 [2024-11-28 05:08:22.380779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:53.318 [2024-11-28 05:08:22.380789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:53.318 [2024-11-28 05:08:22.380796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:53.318 [2024-11-28 05:08:22.380809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:53.318 [2024-11-28 05:08:22.380818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:53.318 [2024-11-28 05:08:22.380828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:53.318 [2024-11-28 05:08:22.380836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.318 [2024-11-28 05:08:22.380846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:53.318 [2024-11-28 05:08:22.380853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:53.318 [2024-11-28 05:08:22.380863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.318 [2024-11-28 05:08:22.380871] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:53.319 [2024-11-28 05:08:22.380885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:53.319 [2024-11-28 05:08:22.380894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:53.319 [2024-11-28 05:08:22.380906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.319 [2024-11-28 05:08:22.380914] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:53.319 [2024-11-28 05:08:22.380923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:53.319 [2024-11-28 05:08:22.380930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:53.319 [2024-11-28 05:08:22.380939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:53.319 [2024-11-28 05:08:22.380946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:53.319 [2024-11-28 05:08:22.380956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:53.319 [2024-11-28 05:08:22.380965] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:53.319 [2024-11-28 05:08:22.380976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:53.319 [2024-11-28 05:08:22.380987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:53.319 [2024-11-28 05:08:22.380996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:53.319 [2024-11-28 05:08:22.381004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:53.319 [2024-11-28 05:08:22.381012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:53.319 [2024-11-28 05:08:22.381019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:53.319 [2024-11-28 05:08:22.381029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:53.319 [2024-11-28 05:08:22.381036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:53.319 [2024-11-28 05:08:22.381053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:53.319 [2024-11-28 05:08:22.381060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:53.319 [2024-11-28 05:08:22.381069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:53.319 [2024-11-28 05:08:22.381077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:53.319 [2024-11-28 05:08:22.381091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:53.319 [2024-11-28 05:08:22.381098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:53.319 [2024-11-28 05:08:22.381111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:53.319 [2024-11-28 05:08:22.381119] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:53.319 [2024-11-28 
05:08:22.381129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:53.319 [2024-11-28 05:08:22.381138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:53.319 [2024-11-28 05:08:22.381148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:53.319 [2024-11-28 05:08:22.381156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:53.319 [2024-11-28 05:08:22.381166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:53.319 [2024-11-28 05:08:22.381173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.319 [2024-11-28 05:08:22.381227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:53.319 [2024-11-28 05:08:22.381238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:18:53.319 [2024-11-28 05:08:22.381252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.319 [2024-11-28 05:08:22.394972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.319 [2024-11-28 05:08:22.395021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:53.319 [2024-11-28 05:08:22.395037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.659 ms 00:18:53.319 [2024-11-28 05:08:22.395047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.319 [2024-11-28 05:08:22.395197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.319 [2024-11-28 05:08:22.395214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:53.319 [2024-11-28 05:08:22.395223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:53.319 [2024-11-28 05:08:22.395234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.319 [2024-11-28 05:08:22.407655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.319 [2024-11-28 05:08:22.407710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:53.319 [2024-11-28 05:08:22.407721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.398 ms 00:18:53.319 [2024-11-28 05:08:22.407733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.319 [2024-11-28 05:08:22.407801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.319 [2024-11-28 05:08:22.407814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:53.319 [2024-11-28 05:08:22.407823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:53.319 [2024-11-28 05:08:22.407833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.319 [2024-11-28 05:08:22.408402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.319 [2024-11-28 05:08:22.408444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:53.319 [2024-11-28 05:08:22.408456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:18:53.319 [2024-11-28 05:08:22.408467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:53.319 [2024-11-28 05:08:22.408629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.319 [2024-11-28 05:08:22.408645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:53.319 [2024-11-28 05:08:22.408655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:18:53.319 [2024-11-28 05:08:22.408666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.319 [2024-11-28 05:08:22.416897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.319 [2024-11-28 05:08:22.416949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:53.319 [2024-11-28 05:08:22.416960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.207 ms 00:18:53.319 [2024-11-28 05:08:22.416969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.319 [2024-11-28 05:08:22.434103] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:53.319 [2024-11-28 05:08:22.434241] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:53.319 [2024-11-28 05:08:22.434279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.319 [2024-11-28 05:08:22.434306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:53.319 [2024-11-28 05:08:22.434332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.198 ms 00:18:53.319 [2024-11-28 05:08:22.434356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.319 [2024-11-28 05:08:22.453232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.319 [2024-11-28 05:08:22.453291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:53.319 [2024-11-28 05:08:22.453306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.762 ms 00:18:53.319 [2024-11-28 05:08:22.453319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.319 [2024-11-28 05:08:22.456355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.319 [2024-11-28 05:08:22.456406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:53.319 [2024-11-28 05:08:22.456416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.912 ms 00:18:53.319 [2024-11-28 05:08:22.456425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.319 [2024-11-28 05:08:22.458885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.319 [2024-11-28 05:08:22.458936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:53.319 [2024-11-28 05:08:22.458946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.407 ms 00:18:53.319 [2024-11-28 05:08:22.458955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.319 [2024-11-28 05:08:22.459329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.319 [2024-11-28 05:08:22.459346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:53.319 [2024-11-28 05:08:22.459356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:18:53.319 [2024-11-28 05:08:22.459365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.319 [2024-11-28 
05:08:22.484391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.319 [2024-11-28 05:08:22.484452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:53.319 [2024-11-28 05:08:22.484465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.003 ms 00:18:53.319 [2024-11-28 05:08:22.484479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.320 [2024-11-28 05:08:22.493282] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:53.320 [2024-11-28 05:08:22.511953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.320 [2024-11-28 05:08:22.512006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:53.320 [2024-11-28 05:08:22.512020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.393 ms 00:18:53.320 [2024-11-28 05:08:22.512030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.320 [2024-11-28 05:08:22.512121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.320 [2024-11-28 05:08:22.512135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:53.320 [2024-11-28 05:08:22.512147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:53.320 [2024-11-28 05:08:22.512155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.320 [2024-11-28 05:08:22.512247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.320 [2024-11-28 05:08:22.512258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:53.320 [2024-11-28 05:08:22.512270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:53.320 [2024-11-28 05:08:22.512278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.320 [2024-11-28 05:08:22.512307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.320 [2024-11-28 05:08:22.512315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:53.320 [2024-11-28 05:08:22.512338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:53.320 [2024-11-28 05:08:22.512346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.320 [2024-11-28 05:08:22.512384] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:53.320 [2024-11-28 05:08:22.512394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.320 [2024-11-28 05:08:22.512405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:53.320 [2024-11-28 05:08:22.512413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:53.320 [2024-11-28 05:08:22.512422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.320 [2024-11-28 05:08:22.518334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.320 [2024-11-28 05:08:22.518389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:53.320 [2024-11-28 05:08:22.518399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.889 ms 00:18:53.320 [2024-11-28 05:08:22.518412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.320 [2024-11-28 05:08:22.518505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.320 [2024-11-28 05:08:22.518518] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:53.320 [2024-11-28 05:08:22.518527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:53.320 [2024-11-28 05:08:22.518543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.320 [2024-11-28 05:08:22.519702] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:53.320 [2024-11-28 05:08:22.521094] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.984 ms, result 0 00:18:53.320 [2024-11-28 05:08:22.523090] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:53.320 Some configs were skipped because the RPC state that can call them passed over. 00:18:53.320 05:08:22 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:53.582 [2024-11-28 05:08:22.760431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.582 [2024-11-28 05:08:22.760485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:53.582 [2024-11-28 05:08:22.760501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.703 ms 00:18:53.582 [2024-11-28 05:08:22.760510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.582 [2024-11-28 05:08:22.760547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.827 ms, result 0 00:18:53.582 true 00:18:53.582 05:08:22 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:53.843 [2024-11-28 05:08:22.980430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.843 [2024-11-28 05:08:22.980484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:53.843 [2024-11-28 05:08:22.980495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.470 ms 00:18:53.843 [2024-11-28 05:08:22.980506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.843 [2024-11-28 05:08:22.980543] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.581 ms, result 0 00:18:53.843 true 00:18:53.843 05:08:22 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 87502 00:18:53.843 05:08:22 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87502 ']' 00:18:53.843 05:08:22 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87502 00:18:53.843 05:08:23 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:18:53.843 05:08:23 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.843 05:08:23 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87502 00:18:53.843 killing process with pid 87502 00:18:53.843 05:08:23 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:53.843 05:08:23 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:53.843 05:08:23 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87502' 00:18:53.843 05:08:23 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87502 00:18:53.843 05:08:23 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87502 00:18:54.107 [2024-11-28 05:08:23.157600] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.107 [2024-11-28 05:08:23.157664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:54.107 [2024-11-28 05:08:23.157690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:54.107 [2024-11-28 05:08:23.157700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.107 [2024-11-28 05:08:23.157726] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:54.107 [2024-11-28 05:08:23.158298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.107 [2024-11-28 05:08:23.158329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:54.107 [2024-11-28 05:08:23.158339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:18:54.107 [2024-11-28 05:08:23.158348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.107 [2024-11-28 05:08:23.158646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.107 [2024-11-28 05:08:23.158667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:54.107 [2024-11-28 05:08:23.158676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:18:54.107 [2024-11-28 05:08:23.158686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.107 [2024-11-28 05:08:23.163235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.107 [2024-11-28 05:08:23.163277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:54.107 [2024-11-28 05:08:23.163287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.528 ms 00:18:54.107 [2024-11-28 05:08:23.163299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.107 [2024-11-28 05:08:23.170314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.107 [2024-11-28 05:08:23.170366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:54.107 [2024-11-28 05:08:23.170376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.973 ms 00:18:54.107 [2024-11-28 05:08:23.170391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.107 [2024-11-28 05:08:23.172342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.107 [2024-11-28 05:08:23.172387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:54.107 [2024-11-28 05:08:23.172397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.870 ms 00:18:54.107 [2024-11-28 05:08:23.172406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.107 [2024-11-28 05:08:23.176585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.107 [2024-11-28 05:08:23.176629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:54.107 [2024-11-28 05:08:23.176642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.138 ms 00:18:54.107 [2024-11-28 05:08:23.176652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.107 [2024-11-28 05:08:23.176784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.107 [2024-11-28 05:08:23.176797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:54.107 [2024-11-28 05:08:23.176805] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:18:54.107 [2024-11-28 05:08:23.176814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.107 [2024-11-28 05:08:23.179488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.107 [2024-11-28 05:08:23.179534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:54.107 [2024-11-28 05:08:23.179544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.654 ms 00:18:54.107 [2024-11-28 05:08:23.179555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.107 [2024-11-28 05:08:23.181210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.107 [2024-11-28 05:08:23.181249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:54.107 [2024-11-28 05:08:23.181257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.617 ms 00:18:54.107 [2024-11-28 05:08:23.181266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.107 [2024-11-28 05:08:23.182607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.107 [2024-11-28 05:08:23.182647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:54.107 [2024-11-28 05:08:23.182655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.304 ms 00:18:54.107 [2024-11-28 05:08:23.182664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.107 [2024-11-28 05:08:23.184042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.107 [2024-11-28 05:08:23.184083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:54.107 [2024-11-28 05:08:23.184092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.302 ms 00:18:54.107 [2024-11-28 05:08:23.184101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.107 [2024-11-28 05:08:23.184134] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:54.107 [2024-11-28 05:08:23.184150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:54.107 [2024-11-28 05:08:23.184160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:54.107 [2024-11-28 05:08:23.184172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:54.107 [2024-11-28 05:08:23.184199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:54.107 [2024-11-28 05:08:23.184210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:54.107 [2024-11-28 05:08:23.184217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184263] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 
[2024-11-28 05:08:23.184490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:18:54.108 [2024-11-28 05:08:23.184697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:54.108 [2024-11-28 05:08:23.184850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.184991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.185000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.185007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.185018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.185025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:54.109 [2024-11-28 05:08:23.185043] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:54.109 [2024-11-28 05:08:23.185050] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d2392ca9-dc0a-4073-94e9-23a6ae314a67 00:18:54.109 [2024-11-28 05:08:23.185062] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:54.109 [2024-11-28 05:08:23.185069] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:54.109 [2024-11-28 05:08:23.185079] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:54.109 [2024-11-28 05:08:23.185087] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:54.109 [2024-11-28 05:08:23.185096] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:54.109 [2024-11-28 05:08:23.185106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:54.109 [2024-11-28 05:08:23.185115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:54.109 [2024-11-28 05:08:23.185122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:54.109 [2024-11-28 05:08:23.185131] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:54.109 [2024-11-28 05:08:23.185137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:54.109 [2024-11-28 05:08:23.185146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:54.109 [2024-11-28 05:08:23.185155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:18:54.109 [2024-11-28 05:08:23.185165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.109 [2024-11-28 05:08:23.186938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.109 [2024-11-28 05:08:23.186977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:54.109 [2024-11-28 05:08:23.186987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.739 ms 00:18:54.109 [2024-11-28 05:08:23.186996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.109 [2024-11-28 05:08:23.187107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.109 [2024-11-28 05:08:23.187119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:54.109 [2024-11-28 05:08:23.187126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:54.109 [2024-11-28 05:08:23.187135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.109 [2024-11-28 05:08:23.193561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.109 [2024-11-28 05:08:23.193604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:54.109 [2024-11-28 05:08:23.193618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.109 [2024-11-28 05:08:23.193628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.109 [2024-11-28 05:08:23.193720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.109 [2024-11-28 05:08:23.193733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:54.109 [2024-11-28 05:08:23.193741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.109 [2024-11-28 05:08:23.193753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.109 [2024-11-28 05:08:23.193803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.109 [2024-11-28 05:08:23.193815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:54.109 [2024-11-28 05:08:23.193823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.109 [2024-11-28 05:08:23.193833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.109 [2024-11-28 05:08:23.193852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.109 [2024-11-28 05:08:23.193862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:54.109 [2024-11-28 05:08:23.193870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.109 [2024-11-28 05:08:23.193879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.109 [2024-11-28 05:08:23.206063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.109 [2024-11-28 05:08:23.206118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:54.109 [2024-11-28 05:08:23.206128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.109 [2024-11-28 05:08:23.206145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.109 [2024-11-28 
05:08:23.215417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.109 [2024-11-28 05:08:23.215476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:54.109 [2024-11-28 05:08:23.215487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.109 [2024-11-28 05:08:23.215500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.109 [2024-11-28 05:08:23.215552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.109 [2024-11-28 05:08:23.215564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:54.109 [2024-11-28 05:08:23.215573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.109 [2024-11-28 05:08:23.215583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.109 [2024-11-28 05:08:23.215617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.109 [2024-11-28 05:08:23.215628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:54.109 [2024-11-28 05:08:23.215636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.109 [2024-11-28 05:08:23.215645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.109 [2024-11-28 05:08:23.215719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.109 [2024-11-28 05:08:23.215734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:54.109 [2024-11-28 05:08:23.215742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.109 [2024-11-28 05:08:23.215751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.109 [2024-11-28 05:08:23.215787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.110 [2024-11-28 05:08:23.215803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:54.110 [2024-11-28 05:08:23.215811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.110 [2024-11-28 05:08:23.215823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.110 [2024-11-28 05:08:23.215863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.110 [2024-11-28 05:08:23.215876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:54.110 [2024-11-28 05:08:23.215884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.110 [2024-11-28 05:08:23.215894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.110 [2024-11-28 05:08:23.215941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:54.110 [2024-11-28 05:08:23.215962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:54.110 [2024-11-28 05:08:23.215971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:54.110 [2024-11-28 05:08:23.215981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.110 [2024-11-28 05:08:23.216124] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.496 ms, result 0 00:18:54.371 05:08:23 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:54.371 05:08:23 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:54.371 [2024-11-28 05:08:23.499869] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:18:54.371 [2024-11-28 05:08:23.500346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87538 ] 00:18:54.371 [2024-11-28 05:08:23.651693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.633 [2024-11-28 05:08:23.681486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.633 [2024-11-28 05:08:23.797807] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:54.633 [2024-11-28 05:08:23.797889] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:54.897 [2024-11-28 05:08:23.958960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.897 [2024-11-28 05:08:23.959019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:54.897 [2024-11-28 05:08:23.959034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:54.897 [2024-11-28 05:08:23.959042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.897 [2024-11-28 05:08:23.961580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.897 [2024-11-28 05:08:23.961631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:54.897 [2024-11-28 05:08:23.961645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.516 ms 00:18:54.897 [2024-11-28 05:08:23.961656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.897 [2024-11-28 05:08:23.961777] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:54.897 [2024-11-28 05:08:23.962045] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:54.897 [2024-11-28 05:08:23.962063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.897 [2024-11-28 05:08:23.962072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:54.897 [2024-11-28 05:08:23.962082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:18:54.897 [2024-11-28 05:08:23.962090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.897 [2024-11-28 05:08:23.963963] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:54.897 [2024-11-28 05:08:23.967444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.897 [2024-11-28 05:08:23.967488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:54.897 [2024-11-28 05:08:23.967505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.483 ms 00:18:54.897 [2024-11-28 05:08:23.967513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.897 [2024-11-28 05:08:23.967596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.897 [2024-11-28 05:08:23.967606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:54.897 [2024-11-28 05:08:23.967615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.026 ms 00:18:54.897 [2024-11-28 05:08:23.967624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.897 [2024-11-28 05:08:23.975584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.897 [2024-11-28 05:08:23.975622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:54.897 [2024-11-28 05:08:23.975633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.913 ms 00:18:54.897 [2024-11-28 05:08:23.975649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.897 [2024-11-28 05:08:23.975787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.897 [2024-11-28 05:08:23.975802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:54.897 [2024-11-28 05:08:23.975811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:54.897 [2024-11-28 05:08:23.975822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.897 [2024-11-28 05:08:23.975850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.897 [2024-11-28 05:08:23.975859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:54.897 [2024-11-28 05:08:23.975867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:54.897 [2024-11-28 05:08:23.975875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.897 [2024-11-28 05:08:23.975903] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:54.897 [2024-11-28 05:08:23.977929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.897 [2024-11-28 05:08:23.977961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:54.897 [2024-11-28 05:08:23.977973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.038 ms 00:18:54.897 [2024-11-28 05:08:23.977986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.897 [2024-11-28 05:08:23.978037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.897 [2024-11-28 05:08:23.978047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:54.897 [2024-11-28 05:08:23.978056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:54.897 [2024-11-28 05:08:23.978064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.897 [2024-11-28 05:08:23.978083] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:54.897 [2024-11-28 05:08:23.978102] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:54.897 [2024-11-28 05:08:23.978144] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:54.897 [2024-11-28 05:08:23.978165] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:54.897 [2024-11-28 05:08:23.978303] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:54.897 [2024-11-28 05:08:23.978317] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:54.897 [2024-11-28 05:08:23.978328] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:54.897 [2024-11-28 05:08:23.978339] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:54.897 [2024-11-28 05:08:23.978349] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:54.897 [2024-11-28 05:08:23.978358] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:54.897 [2024-11-28 05:08:23.978366] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:54.897 [2024-11-28 05:08:23.978374] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:54.897 [2024-11-28 05:08:23.978385] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:54.898 [2024-11-28 05:08:23.978395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.898 [2024-11-28 05:08:23.978404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:54.898 [2024-11-28 05:08:23.978413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:18:54.898 [2024-11-28 05:08:23.978420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.898 [2024-11-28 05:08:23.978508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.898 [2024-11-28 05:08:23.978516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:54.898 [2024-11-28 05:08:23.978524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:54.898 [2024-11-28 05:08:23.978533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.898 [2024-11-28 05:08:23.978633] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:54.898 [2024-11-28 05:08:23.978647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:54.898 [2024-11-28 05:08:23.978657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.898 [2024-11-28 05:08:23.978666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.898 [2024-11-28 05:08:23.978676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:54.898 [2024-11-28 05:08:23.978683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:54.898 [2024-11-28 05:08:23.978692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:54.898 [2024-11-28 05:08:23.978704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:54.898 [2024-11-28 05:08:23.978712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:54.898 [2024-11-28 05:08:23.978720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.898 [2024-11-28 05:08:23.978729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:54.898 [2024-11-28 05:08:23.978737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:54.898 [2024-11-28 05:08:23.978744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.898 [2024-11-28 05:08:23.978752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:54.898 [2024-11-28 05:08:23.978760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:54.898 [2024-11-28 05:08:23.978768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.898 [2024-11-28 05:08:23.978775] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:54.898 [2024-11-28 05:08:23.978783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:54.898 [2024-11-28 05:08:23.978790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.898 [2024-11-28 05:08:23.978798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:54.898 [2024-11-28 05:08:23.978806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:54.898 [2024-11-28 05:08:23.978814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.898 [2024-11-28 05:08:23.978821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:54.898 [2024-11-28 05:08:23.978834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:54.898 [2024-11-28 05:08:23.978842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.898 [2024-11-28 05:08:23.978850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:54.898 [2024-11-28 05:08:23.978860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:54.898 [2024-11-28 05:08:23.978868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.898 [2024-11-28 05:08:23.978875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:54.898 [2024-11-28 05:08:23.978883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:54.898 [2024-11-28 05:08:23.978891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.898 [2024-11-28 05:08:23.978899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:54.898 [2024-11-28 05:08:23.978907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:54.898 [2024-11-28 05:08:23.978914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.898 [2024-11-28 05:08:23.978921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:54.898 [2024-11-28 05:08:23.978929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:54.898 [2024-11-28 05:08:23.978937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.898 [2024-11-28 05:08:23.978944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:54.898 [2024-11-28 05:08:23.978951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:54.898 [2024-11-28 05:08:23.978960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.898 [2024-11-28 05:08:23.978966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:54.898 [2024-11-28 05:08:23.978973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:54.898 [2024-11-28 05:08:23.978980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.898 [2024-11-28 05:08:23.978986] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:54.898 [2024-11-28 05:08:23.978994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:54.898 [2024-11-28 05:08:23.979002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.898 [2024-11-28 05:08:23.979009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.898 [2024-11-28 05:08:23.979017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:54.898 
[2024-11-28 05:08:23.979024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:54.898 [2024-11-28 05:08:23.979031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:54.898 [2024-11-28 05:08:23.979038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:54.898 [2024-11-28 05:08:23.979044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:54.898 [2024-11-28 05:08:23.979050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:54.898 [2024-11-28 05:08:23.979058] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:54.898 [2024-11-28 05:08:23.979068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.898 [2024-11-28 05:08:23.979080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:54.898 [2024-11-28 05:08:23.979088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:54.898 [2024-11-28 05:08:23.979095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:54.898 [2024-11-28 05:08:23.979105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:54.898 [2024-11-28 05:08:23.979112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:54.898 [2024-11-28 05:08:23.979119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:54.898 [2024-11-28 05:08:23.979127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:54.898 [2024-11-28 05:08:23.979140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:54.898 [2024-11-28 05:08:23.979147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:54.898 [2024-11-28 05:08:23.979155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:54.898 [2024-11-28 05:08:23.979162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:54.898 [2024-11-28 05:08:23.979169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:54.898 [2024-11-28 05:08:23.979190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:54.898 [2024-11-28 05:08:23.979198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:54.898 [2024-11-28 05:08:23.979206] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:54.898 [2024-11-28 05:08:23.979217] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.898 [2024-11-28 05:08:23.979229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:54.898 [2024-11-28 05:08:23.979236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:54.898 [2024-11-28 05:08:23.979244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:54.898 [2024-11-28 05:08:23.979252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:54.898 [2024-11-28 05:08:23.979259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.898 [2024-11-28 05:08:23.979275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:54.898 [2024-11-28 05:08:23.979283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:18:54.898 [2024-11-28 05:08:23.979290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.898 [2024-11-28 05:08:23.993076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.898 [2024-11-28 05:08:23.993120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:54.898 [2024-11-28 05:08:23.993132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.734 ms 00:18:54.898 [2024-11-28 05:08:23.993141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.898 [2024-11-28 05:08:23.993300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.898 [2024-11-28 05:08:23.993317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:54.898 [2024-11-28 05:08:23.993327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:54.898 [2024-11-28 05:08:23.993334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.898 [2024-11-28 05:08:24.016643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.898 [2024-11-28 05:08:24.016698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:54.898 [2024-11-28 05:08:24.016715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.283 ms 00:18:54.899 [2024-11-28 05:08:24.016725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.016838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.016853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:54.899 [2024-11-28 05:08:24.016870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:54.899 [2024-11-28 05:08:24.016879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.017518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.017569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:54.899 [2024-11-28 05:08:24.017583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:18:54.899 [2024-11-28 05:08:24.017599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 
05:08:24.017817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.017838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:54.899 [2024-11-28 05:08:24.017849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:18:54.899 [2024-11-28 05:08:24.017858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.026047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.026085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:54.899 [2024-11-28 05:08:24.026101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.160 ms 00:18:54.899 [2024-11-28 05:08:24.026109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.029886] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:54.899 [2024-11-28 05:08:24.029928] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:54.899 [2024-11-28 05:08:24.029940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.029948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:54.899 [2024-11-28 05:08:24.029957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.707 ms 00:18:54.899 [2024-11-28 05:08:24.029965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.045857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.045900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:54.899 [2024-11-28 05:08:24.045911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.832 ms 00:18:54.899 [2024-11-28 05:08:24.045921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.048601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.048642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:54.899 [2024-11-28 05:08:24.048652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.583 ms 00:18:54.899 [2024-11-28 05:08:24.048660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.051230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.051267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:54.899 [2024-11-28 05:08:24.051277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.509 ms 00:18:54.899 [2024-11-28 05:08:24.051284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.051654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.051676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:54.899 [2024-11-28 05:08:24.051686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:18:54.899 [2024-11-28 05:08:24.051695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.075200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.075256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:54.899 [2024-11-28 05:08:24.075269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.480 ms 00:18:54.899 [2024-11-28 05:08:24.075278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.083418] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:54.899 [2024-11-28 05:08:24.102464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.102523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:54.899 [2024-11-28 05:08:24.102536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.087 ms 00:18:54.899 [2024-11-28 05:08:24.102544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.102638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.102653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:54.899 [2024-11-28 05:08:24.102663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:54.899 [2024-11-28 05:08:24.102672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.102732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.102742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:54.899 [2024-11-28 05:08:24.102751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:54.899 [2024-11-28 05:08:24.102759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.102785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.102794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:54.899 [2024-11-28 05:08:24.102807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:54.899 [2024-11-28 05:08:24.102815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.102854] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:54.899 [2024-11-28 05:08:24.102866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.102875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:54.899 [2024-11-28 05:08:24.102884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:54.899 [2024-11-28 05:08:24.102892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.109079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.109125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:54.899 [2024-11-28 05:08:24.109136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.164 ms 00:18:54.899 [2024-11-28 05:08:24.109156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.109266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.899 [2024-11-28 05:08:24.109278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:18:54.899 [2024-11-28 05:08:24.109288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:54.899 [2024-11-28 05:08:24.109297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.899 [2024-11-28 05:08:24.110891] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:54.899 [2024-11-28 05:08:24.112316] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.618 ms, result 0 00:18:54.899 [2024-11-28 05:08:24.113408] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:54.899 [2024-11-28 05:08:24.120948] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:55.903  [2024-11-28T05:08:26.127Z] Copying: 18/256 [MB] (18 MBps) [2024-11-28T05:08:27.511Z] Copying: 33/256 [MB] (14 MBps) [2024-11-28T05:08:28.450Z] Copying: 44/256 [MB] (10 MBps) [2024-11-28T05:08:29.391Z] Copying: 58/256 [MB] (14 MBps) [2024-11-28T05:08:30.336Z] Copying: 81/256 [MB] (22 MBps) [2024-11-28T05:08:31.280Z] Copying: 92/256 [MB] (10 MBps) [2024-11-28T05:08:32.220Z] Copying: 103/256 [MB] (10 MBps) [2024-11-28T05:08:33.164Z] Copying: 115/256 [MB] (12 MBps) [2024-11-28T05:08:34.550Z] Copying: 135/256 [MB] (20 MBps) [2024-11-28T05:08:35.493Z] Copying: 151/256 [MB] (15 MBps) [2024-11-28T05:08:36.432Z] Copying: 167/256 [MB] (15 MBps) [2024-11-28T05:08:37.376Z] Copying: 185/256 [MB] (17 MBps) [2024-11-28T05:08:38.317Z] Copying: 209/256 [MB] (24 MBps) [2024-11-28T05:08:39.258Z] Copying: 228/256 [MB] (18 MBps) [2024-11-28T05:08:39.831Z] Copying: 245/256 [MB] (17 MBps) [2024-11-28T05:08:39.831Z] Copying: 256/256 [MB] (average 16 MBps)[2024-11-28 05:08:39.774404] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:10.547 [2024-11-28 05:08:39.776258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.547 [2024-11-28 05:08:39.776307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:10.547 [2024-11-28 05:08:39.776321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:10.547 [2024-11-28 05:08:39.776330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.547 [2024-11-28 05:08:39.776352] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:10.547 [2024-11-28 05:08:39.777000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.547 [2024-11-28 05:08:39.777033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:10.547 [2024-11-28 05:08:39.777051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms 00:19:10.547 [2024-11-28 05:08:39.777060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.547 [2024-11-28 05:08:39.777342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.547 [2024-11-28 05:08:39.777358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:10.547 [2024-11-28 05:08:39.777368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:19:10.547 [2024-11-28 05:08:39.777377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.547 [2024-11-28 05:08:39.781078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:10.547 [2024-11-28 05:08:39.781102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:10.547 [2024-11-28 05:08:39.781112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.685 ms 00:19:10.547 [2024-11-28 05:08:39.781120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.547 [2024-11-28 05:08:39.788087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.547 [2024-11-28 05:08:39.788128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:10.547 [2024-11-28 05:08:39.788146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.933 ms 00:19:10.547 [2024-11-28 05:08:39.788153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.547 [2024-11-28 05:08:39.791058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.547 [2024-11-28 05:08:39.791113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:10.547 [2024-11-28 05:08:39.791122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.845 ms 00:19:10.547 [2024-11-28 05:08:39.791131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.547 [2024-11-28 05:08:39.796074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.547 [2024-11-28 05:08:39.796127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:10.547 [2024-11-28 05:08:39.796147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.869 ms 00:19:10.547 [2024-11-28 05:08:39.796155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.547 [2024-11-28 05:08:39.796301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.547 [2024-11-28 05:08:39.796313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:10.547 [2024-11-28 05:08:39.796326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:10.547 [2024-11-28 05:08:39.796335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.547 [2024-11-28 05:08:39.799714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.547 [2024-11-28 05:08:39.799762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:10.547 [2024-11-28 05:08:39.799772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.360 ms 00:19:10.548 [2024-11-28 05:08:39.799779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.548 [2024-11-28 05:08:39.802821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.548 [2024-11-28 05:08:39.802872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:10.548 [2024-11-28 05:08:39.802881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.998 ms 00:19:10.548 [2024-11-28 05:08:39.802889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.548 [2024-11-28 05:08:39.805596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.548 [2024-11-28 05:08:39.805645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:10.548 [2024-11-28 05:08:39.805654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.664 ms 00:19:10.548 [2024-11-28 05:08:39.805660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.548 [2024-11-28 
05:08:39.808135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.548 [2024-11-28 05:08:39.808205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:10.548 [2024-11-28 05:08:39.808215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.391 ms 00:19:10.548 [2024-11-28 05:08:39.808222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.548 [2024-11-28 05:08:39.808263] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:10.548 [2024-11-28 05:08:39.808279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 
[2024-11-28 05:08:39.808438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 
state: free 00:19:10.548 [2024-11-28 05:08:39.808645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:10.548 [2024-11-28 05:08:39.808720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 
0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.808997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.809006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.809014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.809021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.809028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.809036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.809043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:10.549 [2024-11-28 05:08:39.809058] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:10.549 [2024-11-28 05:08:39.809071] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d2392ca9-dc0a-4073-94e9-23a6ae314a67 00:19:10.549 [2024-11-28 05:08:39.809079] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:10.549 [2024-11-28 05:08:39.809087] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:10.549 [2024-11-28 05:08:39.809094] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:10.549 [2024-11-28 05:08:39.809102] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:10.549 [2024-11-28 05:08:39.809110] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:10.549 [2024-11-28 05:08:39.809140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:10.549 [2024-11-28 05:08:39.809148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:10.549 [2024-11-28 05:08:39.809154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:10.549 [2024-11-28 05:08:39.809161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:10.549 [2024-11-28 05:08:39.809169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.549 [2024-11-28 05:08:39.809190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:10.549 [2024-11-28 05:08:39.809199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:19:10.549 [2024-11-28 05:08:39.809210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.549 [2024-11-28 05:08:39.811461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.549 [2024-11-28 05:08:39.811499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:10.549 [2024-11-28 05:08:39.811511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.231 ms 00:19:10.549 [2024-11-28 05:08:39.811526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.549 [2024-11-28 05:08:39.811660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.549 [2024-11-28 05:08:39.811669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:10.549 [2024-11-28 05:08:39.811678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:19:10.549 [2024-11-28 05:08:39.811686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.549 [2024-11-28 05:08:39.819366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.549 [2024-11-28 05:08:39.819413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:10.549 [2024-11-28 05:08:39.819428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:10.549 [2024-11-28 05:08:39.819436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.549 [2024-11-28 05:08:39.819507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.549 [2024-11-28 05:08:39.819521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:10.549 [2024-11-28 05:08:39.819530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.549 [2024-11-28 05:08:39.819537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.550 [2024-11-28 05:08:39.819594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.550 [2024-11-28 05:08:39.819605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:10.550 [2024-11-28 05:08:39.819613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.550 [2024-11-28 05:08:39.819624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.550 [2024-11-28 05:08:39.819645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.550 [2024-11-28 05:08:39.819653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:10.550 [2024-11-28 05:08:39.819661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.550 [2024-11-28 05:08:39.819672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.810 [2024-11-28 05:08:39.833377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.810 [2024-11-28 05:08:39.833432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:10.810 [2024-11-28 05:08:39.833446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.810 [2024-11-28 05:08:39.833455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.810 [2024-11-28 05:08:39.843660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.810 [2024-11-28 05:08:39.843709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:10.810 [2024-11-28 05:08:39.843720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.810 [2024-11-28 05:08:39.843736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.810 [2024-11-28 05:08:39.843789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.810 [2024-11-28 05:08:39.843799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:10.810 [2024-11-28 05:08:39.843808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.810 [2024-11-28 05:08:39.843820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.810 [2024-11-28 05:08:39.843857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.810 [2024-11-28 05:08:39.843866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:10.810 [2024-11-28 05:08:39.843874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.810 [2024-11-28 05:08:39.843882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.810 [2024-11-28 05:08:39.843951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.810 [2024-11-28 05:08:39.843961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 
00:19:10.810 [2024-11-28 05:08:39.843977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.810 [2024-11-28 05:08:39.843985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.810 [2024-11-28 05:08:39.844015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.810 [2024-11-28 05:08:39.844028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:10.810 [2024-11-28 05:08:39.844036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.810 [2024-11-28 05:08:39.844044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.810 [2024-11-28 05:08:39.844088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.810 [2024-11-28 05:08:39.844098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:10.810 [2024-11-28 05:08:39.844107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.810 [2024-11-28 05:08:39.844115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.810 [2024-11-28 05:08:39.844165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.810 [2024-11-28 05:08:39.844192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:10.810 [2024-11-28 05:08:39.844202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.810 [2024-11-28 05:08:39.844209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.811 [2024-11-28 05:08:39.844367] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.074 ms, result 0 00:19:10.811 00:19:10.811 00:19:10.811 05:08:40 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:10.811 05:08:40 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:11.382 05:08:40 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:11.643 [2024-11-28 05:08:40.691082] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:19:11.643 [2024-11-28 05:08:40.691247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87721 ] 00:19:11.643 [2024-11-28 05:08:40.835682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.643 [2024-11-28 05:08:40.864447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.904 [2024-11-28 05:08:40.981816] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:11.904 [2024-11-28 05:08:40.981908] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:11.904 [2024-11-28 05:08:41.142230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.904 [2024-11-28 05:08:41.142282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:11.904 [2024-11-28 05:08:41.142297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:11.904 [2024-11-28 05:08:41.142310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.904 [2024-11-28 05:08:41.145122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.904 [2024-11-28 05:08:41.145210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:11.904 [2024-11-28 05:08:41.145224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.791 ms 00:19:11.904 [2024-11-28 05:08:41.145232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.904 [2024-11-28 05:08:41.145368] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:11.904 [2024-11-28 05:08:41.145641] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:11.904 [2024-11-28 05:08:41.145690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.904 [2024-11-28 05:08:41.145699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:11.905 [2024-11-28 05:08:41.145709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:19:11.905 [2024-11-28 05:08:41.145722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.905 [2024-11-28 05:08:41.147511] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:11.905 [2024-11-28 05:08:41.151073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.905 [2024-11-28 05:08:41.151123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:11.905 [2024-11-28 05:08:41.151139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.565 ms 00:19:11.905 [2024-11-28 05:08:41.151149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.905 [2024-11-28 05:08:41.151263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.905 [2024-11-28 05:08:41.151276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:11.905 [2024-11-28 05:08:41.151285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:11.905 [2024-11-28 05:08:41.151292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.905 [2024-11-28 05:08:41.159223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:11.905 [2024-11-28 05:08:41.159262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:11.905 [2024-11-28 05:08:41.159272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.883 ms 00:19:11.905 [2024-11-28 05:08:41.159281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.905 [2024-11-28 05:08:41.159438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.905 [2024-11-28 05:08:41.159450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:11.905 [2024-11-28 05:08:41.159459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:19:11.905 [2024-11-28 05:08:41.159470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.905 [2024-11-28 05:08:41.159501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.905 [2024-11-28 05:08:41.159510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:11.905 [2024-11-28 05:08:41.159523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:11.905 [2024-11-28 05:08:41.159530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.905 [2024-11-28 05:08:41.159550] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:11.905 [2024-11-28 05:08:41.161549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.905 [2024-11-28 05:08:41.161587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:11.905 [2024-11-28 05:08:41.161597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.003 ms 00:19:11.905 [2024-11-28 05:08:41.161610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.905 [2024-11-28 05:08:41.161656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.905 [2024-11-28 05:08:41.161670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:11.905 [2024-11-28 05:08:41.161694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:11.905 [2024-11-28 05:08:41.161702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.905 [2024-11-28 05:08:41.161720] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:11.905 [2024-11-28 05:08:41.161740] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:11.905 [2024-11-28 05:08:41.161782] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:11.905 [2024-11-28 05:08:41.161801] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:11.905 [2024-11-28 05:08:41.161908] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:11.905 [2024-11-28 05:08:41.161919] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:11.905 [2024-11-28 05:08:41.161930] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:11.905 [2024-11-28 05:08:41.161941] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:11.905 [2024-11-28 05:08:41.161951] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:11.905 [2024-11-28 05:08:41.161962] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:11.905 [2024-11-28 05:08:41.161971] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:11.905 [2024-11-28 05:08:41.161983] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:11.905 [2024-11-28 05:08:41.161993] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:11.905 [2024-11-28 05:08:41.162003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.905 [2024-11-28 05:08:41.162011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:11.905 [2024-11-28 05:08:41.162019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:19:11.905 [2024-11-28 05:08:41.162026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.905 [2024-11-28 05:08:41.162114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.905 [2024-11-28 05:08:41.162133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:11.905 [2024-11-28 05:08:41.162142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:11.905 [2024-11-28 05:08:41.162149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.905 [2024-11-28 05:08:41.162272] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:11.905 [2024-11-28 05:08:41.162288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:11.905 [2024-11-28 05:08:41.162302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:11.905 [2024-11-28 05:08:41.162310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.905 [2024-11-28 05:08:41.162320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:11.905 [2024-11-28 05:08:41.162327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:11.905 [2024-11-28 05:08:41.162335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:11.905 [2024-11-28 05:08:41.162346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:11.905 [2024-11-28 05:08:41.162355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:11.905 [2024-11-28 05:08:41.162362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:11.905 [2024-11-28 05:08:41.162370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:11.905 [2024-11-28 05:08:41.162377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:11.905 [2024-11-28 05:08:41.162385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:11.905 [2024-11-28 05:08:41.162396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:11.905 [2024-11-28 05:08:41.162405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:11.905 [2024-11-28 05:08:41.162413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.905 [2024-11-28 05:08:41.162421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:11.905 [2024-11-28 05:08:41.162429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:11.905 [2024-11-28 05:08:41.162436] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.905 [2024-11-28 05:08:41.162445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:11.905 [2024-11-28 05:08:41.162453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:11.905 [2024-11-28 05:08:41.162461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:11.905 [2024-11-28 05:08:41.162468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:11.905 [2024-11-28 05:08:41.162483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:11.905 [2024-11-28 05:08:41.162491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:11.905 [2024-11-28 05:08:41.162499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:11.905 [2024-11-28 05:08:41.162507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:11.905 [2024-11-28 05:08:41.162515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:11.905 [2024-11-28 05:08:41.162523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:11.905 [2024-11-28 05:08:41.162531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:11.905 [2024-11-28 05:08:41.162538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:11.905 [2024-11-28 05:08:41.162546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:11.905 [2024-11-28 05:08:41.162554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:11.905 [2024-11-28 05:08:41.162562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:11.905 [2024-11-28 05:08:41.162570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:11.905 [2024-11-28 05:08:41.162578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:11.905 [2024-11-28 05:08:41.162585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:11.905 [2024-11-28 05:08:41.162594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:11.905 [2024-11-28 05:08:41.162602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:11.905 [2024-11-28 05:08:41.162613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.905 [2024-11-28 05:08:41.162621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:11.905 [2024-11-28 05:08:41.162629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:11.905 [2024-11-28 05:08:41.162637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.905 [2024-11-28 05:08:41.162645] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:11.905 [2024-11-28 05:08:41.162654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:11.905 [2024-11-28 05:08:41.162664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:11.905 [2024-11-28 05:08:41.162673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.905 [2024-11-28 05:08:41.162681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:11.905 [2024-11-28 05:08:41.162688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:11.905 [2024-11-28 05:08:41.162695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:11.906 
[2024-11-28 05:08:41.162702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:11.906 [2024-11-28 05:08:41.162709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:11.906 [2024-11-28 05:08:41.162715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:11.906 [2024-11-28 05:08:41.162724] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:11.906 [2024-11-28 05:08:41.162733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:11.906 [2024-11-28 05:08:41.162743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:11.906 [2024-11-28 05:08:41.162751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:11.906 [2024-11-28 05:08:41.162758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:11.906 [2024-11-28 05:08:41.162765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:11.906 [2024-11-28 05:08:41.162772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:11.906 [2024-11-28 05:08:41.162780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:11.906 [2024-11-28 05:08:41.162787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:11.906 [2024-11-28 05:08:41.162802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:11.906 [2024-11-28 05:08:41.162809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:11.906 [2024-11-28 05:08:41.162816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:11.906 [2024-11-28 05:08:41.162823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:11.906 [2024-11-28 05:08:41.162830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:11.906 [2024-11-28 05:08:41.162838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:11.906 [2024-11-28 05:08:41.162845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:11.906 [2024-11-28 05:08:41.162853] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:11.906 [2024-11-28 05:08:41.162863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:11.906 [2024-11-28 05:08:41.162875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:11.906 [2024-11-28 05:08:41.162883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:11.906 [2024-11-28 05:08:41.162889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:11.906 [2024-11-28 05:08:41.162896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:11.906 [2024-11-28 05:08:41.162905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.906 [2024-11-28 05:08:41.162916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:11.906 [2024-11-28 05:08:41.162926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:19:11.906 [2024-11-28 05:08:41.162933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.906 [2024-11-28 05:08:41.176977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.906 [2024-11-28 05:08:41.177025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:11.906 [2024-11-28 05:08:41.177036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.991 ms 00:19:11.906 [2024-11-28 05:08:41.177044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.906 [2024-11-28 05:08:41.177196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.906 [2024-11-28 05:08:41.177213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:11.906 [2024-11-28 05:08:41.177222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:11.906 [2024-11-28 05:08:41.177230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.167 [2024-11-28 05:08:41.200381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.167 [2024-11-28 05:08:41.200436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.167 [2024-11-28 05:08:41.200452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.126 ms 00:19:12.167 [2024-11-28 05:08:41.200462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.167 [2024-11-28 05:08:41.200571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.167 [2024-11-28 05:08:41.200586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.167 [2024-11-28 05:08:41.200597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:12.167 [2024-11-28 05:08:41.200606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.167 [2024-11-28 05:08:41.201165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.167 [2024-11-28 05:08:41.201230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.167 [2024-11-28 05:08:41.201243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:19:12.167 [2024-11-28 05:08:41.201255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.167 [2024-11-28 05:08:41.201441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.167 [2024-11-28 05:08:41.201469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.167 [2024-11-28 05:08:41.201481] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:19:12.167 [2024-11-28 05:08:41.201490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.167 [2024-11-28 05:08:41.210963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.167 [2024-11-28 05:08:41.211010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:12.167 [2024-11-28 05:08:41.211027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.447 ms 00:19:12.167 [2024-11-28 05:08:41.211040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.167 [2024-11-28 05:08:41.214918] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:12.167 [2024-11-28 05:08:41.214975] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:12.167 [2024-11-28 05:08:41.214988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.167 [2024-11-28 05:08:41.214996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:12.167 [2024-11-28 05:08:41.215005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.826 ms 00:19:12.167 [2024-11-28 05:08:41.215013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.167 [2024-11-28 05:08:41.231095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.167 [2024-11-28 05:08:41.231151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:12.167 [2024-11-28 05:08:41.231164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.022 ms 00:19:12.167 [2024-11-28 05:08:41.231172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.167 [2024-11-28 05:08:41.234040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.167 [2024-11-28 05:08:41.234091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:12.167 [2024-11-28 05:08:41.234101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.763 ms 00:19:12.167 [2024-11-28 05:08:41.234108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.167 [2024-11-28 05:08:41.236795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.167 [2024-11-28 05:08:41.236841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:12.167 [2024-11-28 05:08:41.236851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.572 ms 00:19:12.167 [2024-11-28 05:08:41.236858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.167 [2024-11-28 05:08:41.237222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.167 [2024-11-28 05:08:41.237242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:12.167 [2024-11-28 05:08:41.237258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:19:12.167 [2024-11-28 05:08:41.237269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.167 [2024-11-28 05:08:41.261857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.167 [2024-11-28 05:08:41.261921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:12.167 [2024-11-28 05:08:41.261935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.562 ms 00:19:12.167 [2024-11-28 05:08:41.261944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.167 [2024-11-28 05:08:41.270194] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:12.167 [2024-11-28 05:08:41.290398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.167 [2024-11-28 05:08:41.290459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:12.167 [2024-11-28 05:08:41.290474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.358 ms 00:19:12.167 [2024-11-28 05:08:41.290482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.167 [2024-11-28 05:08:41.290584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.167 [2024-11-28 05:08:41.290596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:12.167 [2024-11-28 05:08:41.290610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:12.167 [2024-11-28 05:08:41.290619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.167 [2024-11-28 05:08:41.290678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.167 [2024-11-28 05:08:41.290688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:12.167 [2024-11-28 05:08:41.290696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:12.167 [2024-11-28 05:08:41.290705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.167 [2024-11-28 05:08:41.290730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.168 [2024-11-28 05:08:41.290739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:12.168 [2024-11-28 05:08:41.290748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:12.168 [2024-11-28 05:08:41.290758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.168 [2024-11-28 05:08:41.290797] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:12.168 [2024-11-28 05:08:41.290808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.168 [2024-11-28 05:08:41.290816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:12.168 [2024-11-28 05:08:41.290825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:12.168 [2024-11-28 05:08:41.290833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.168 [2024-11-28 05:08:41.297057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.168 [2024-11-28 05:08:41.297112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:12.168 [2024-11-28 05:08:41.297124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.202 ms 00:19:12.168 [2024-11-28 05:08:41.297133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.168 [2024-11-28 05:08:41.297252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.168 [2024-11-28 05:08:41.297265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:12.168 [2024-11-28 05:08:41.297274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:12.168 [2024-11-28 05:08:41.297282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.168 
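The FTL startup above proceeds as a chain of management steps; each one is logged by trace_step as an Action with a name, a duration in milliseconds, and a status. To see where the startup time goes, the per-step durations can be totalled per action name. A minimal sketch, assuming the console output was saved one log entry per line to a file named build.log (both the file name and that layout are assumptions, not part of this run):

  awk '
    match($0, /name: [^[]+/) {                  # action name from a trace_step "name:" entry
      n = substr($0, RSTART + 6, RLENGTH - 6)
      sub(/ [0-9]+:[0-9]+:[0-9.]+ *$/, "", n)   # drop a trailing elapsed-time stamp, if present
    }
    match($0, /duration: [0-9.]+ ms/) {         # pair it with the "duration:" entry that follows
      split(substr($0, RSTART, RLENGTH), f, " ")
      d[n] += f[2]
    }
    END { for (a in d) printf "%10.3f ms  %s\n", d[a], a }
  ' build.log | sort -rn

Sorted output puts the dominant steps first; in the startup above, Initialize L2P, Restore P2L checkpoints, and Initialize NV cache carry the largest shares.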
[2024-11-28 05:08:41.298461] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:12.168 [2024-11-28 05:08:41.299900] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 155.916 ms, result 0 00:19:12.168 [2024-11-28 05:08:41.301082] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:12.168 [2024-11-28 05:08:41.308624] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:12.430  [2024-11-28T05:08:41.714Z] Copying: 4096/4096 [kB] (average 18 MBps)
[2024-11-28 05:08:41.525956] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:12.430 [2024-11-28 05:08:41.527095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.430 [2024-11-28 05:08:41.527143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:12.430 [2024-11-28 05:08:41.527155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:12.430 [2024-11-28 05:08:41.527163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.430 [2024-11-28 05:08:41.527215] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:12.430 [2024-11-28 05:08:41.527872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.430 [2024-11-28 05:08:41.527910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:12.430 [2024-11-28 05:08:41.527921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:19:12.430 [2024-11-28 05:08:41.527937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.430 [2024-11-28 05:08:41.530030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.430 [2024-11-28 05:08:41.530078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:12.430 [2024-11-28 05:08:41.530096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.067 ms 00:19:12.430 [2024-11-28 05:08:41.530104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.430 [2024-11-28 05:08:41.534543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.430 [2024-11-28 05:08:41.534581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:12.430 [2024-11-28 05:08:41.534591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.423 ms 00:19:12.430 [2024-11-28 05:08:41.534598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.430 [2024-11-28 05:08:41.541493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.430 [2024-11-28 05:08:41.541535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:12.430 [2024-11-28 05:08:41.541547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.861 ms 00:19:12.430 [2024-11-28 05:08:41.541562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.430 [2024-11-28 05:08:41.544206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.430 [2024-11-28 05:08:41.544251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:12.430 [2024-11-28 05:08:41.544261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 2.594 ms 00:19:12.430 [2024-11-28 05:08:41.544268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.430 [2024-11-28 05:08:41.549774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.430 [2024-11-28 05:08:41.549823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:12.430 [2024-11-28 05:08:41.549834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.461 ms 00:19:12.430 [2024-11-28 05:08:41.549841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.430 [2024-11-28 05:08:41.549972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.430 [2024-11-28 05:08:41.549982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:12.430 [2024-11-28 05:08:41.549999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:12.430 [2024-11-28 05:08:41.550007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.430 [2024-11-28 05:08:41.553368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.430 [2024-11-28 05:08:41.553416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:12.430 [2024-11-28 05:08:41.553426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.342 ms 00:19:12.430 [2024-11-28 05:08:41.553432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.430 [2024-11-28 05:08:41.556259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.430 [2024-11-28 05:08:41.556305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:12.430 [2024-11-28 05:08:41.556314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.784 ms 00:19:12.430 [2024-11-28 05:08:41.556321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.430 [2024-11-28 05:08:41.558299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.430 [2024-11-28 05:08:41.558345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:12.430 [2024-11-28 05:08:41.558356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.935 ms 00:19:12.430 [2024-11-28 05:08:41.558362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.430 [2024-11-28 05:08:41.560762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.430 [2024-11-28 05:08:41.560810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:12.430 [2024-11-28 05:08:41.560820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.324 ms 00:19:12.430 [2024-11-28 05:08:41.560827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.430 [2024-11-28 05:08:41.560865] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:12.430 [2024-11-28 05:08:41.560881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.560892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.560901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.560909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 
05:08:41.560917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.560925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.560933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.560941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.560949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.560957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.560965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.560973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.560981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.560988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.560995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:12.430 [2024-11-28 05:08:41.561117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:12.430 [2024-11-28 05:08:41.561158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:12.431 [2024-11-28 05:08:41.561726] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:12.431 [2024-11-28 05:08:41.561735] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d2392ca9-dc0a-4073-94e9-23a6ae314a67 00:19:12.431 [2024-11-28 05:08:41.561744] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:12.431 [2024-11-28 05:08:41.561757] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:12.431 
[2024-11-28 05:08:41.561765] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:12.431 [2024-11-28 05:08:41.561773] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:12.431 [2024-11-28 05:08:41.561781] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:12.431 [2024-11-28 05:08:41.561795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:12.431 [2024-11-28 05:08:41.561803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:12.431 [2024-11-28 05:08:41.561809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:12.431 [2024-11-28 05:08:41.561816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:12.431 [2024-11-28 05:08:41.561823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.431 [2024-11-28 05:08:41.561831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:12.431 [2024-11-28 05:08:41.561840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:19:12.431 [2024-11-28 05:08:41.561848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.431 [2024-11-28 05:08:41.563857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.431 [2024-11-28 05:08:41.563903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:12.431 [2024-11-28 05:08:41.563914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.990 ms 00:19:12.431 [2024-11-28 05:08:41.563925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.431 [2024-11-28 05:08:41.564052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.431 [2024-11-28 05:08:41.564062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:12.431 [2024-11-28 05:08:41.564072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:19:12.431 [2024-11-28 05:08:41.564079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.431 [2024-11-28 05:08:41.571749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.431 [2024-11-28 05:08:41.571801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:12.431 [2024-11-28 05:08:41.571822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.431 [2024-11-28 05:08:41.571829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.431 [2024-11-28 05:08:41.571889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.431 [2024-11-28 05:08:41.571897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.431 [2024-11-28 05:08:41.571905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.432 [2024-11-28 05:08:41.571912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.432 [2024-11-28 05:08:41.571965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.432 [2024-11-28 05:08:41.571976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.432 [2024-11-28 05:08:41.571987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.432 [2024-11-28 05:08:41.571994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.432 [2024-11-28 05:08:41.572019] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:12.432 [2024-11-28 05:08:41.572031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.432 [2024-11-28 05:08:41.572038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.432 [2024-11-28 05:08:41.572046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.432 [2024-11-28 05:08:41.585257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.432 [2024-11-28 05:08:41.585306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.432 [2024-11-28 05:08:41.585316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.432 [2024-11-28 05:08:41.585330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.432 [2024-11-28 05:08:41.594946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.432 [2024-11-28 05:08:41.594995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.432 [2024-11-28 05:08:41.595013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.432 [2024-11-28 05:08:41.595021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.432 [2024-11-28 05:08:41.595053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.432 [2024-11-28 05:08:41.595062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:12.432 [2024-11-28 05:08:41.595071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.432 [2024-11-28 05:08:41.595082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.432 [2024-11-28 05:08:41.595121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.432 [2024-11-28 05:08:41.595130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:12.432 [2024-11-28 05:08:41.595138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.432 [2024-11-28 05:08:41.595145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.432 [2024-11-28 05:08:41.595285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.432 [2024-11-28 05:08:41.595296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:12.432 [2024-11-28 05:08:41.595312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.432 [2024-11-28 05:08:41.595323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.432 [2024-11-28 05:08:41.595357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.432 [2024-11-28 05:08:41.595369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:12.432 [2024-11-28 05:08:41.595378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.432 [2024-11-28 05:08:41.595386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.432 [2024-11-28 05:08:41.595445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.432 [2024-11-28 05:08:41.595454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:12.432 [2024-11-28 05:08:41.595462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.432 [2024-11-28 05:08:41.595474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
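The Rollback entries above are the management layer unwinding its startup actions during 'FTL shutdown'; each completes with status 0, and the process-level outcome is then reported by a finish_msg entry carrying a name, a total duration, and a result code. A quick pass that flags any management process that did not finish with result 0, under the same build.log assumption as before:

  grep -o "Management process finished, name '[^']*', duration = [0-9.]* ms, result [0-9-]*" build.log |
  awk -F', ' '{ print ($NF == "result 0" ? "OK   " : "FAIL ") $0 }'

In this run the startup pass reported result 0 at 155.916 ms, and the shutdown's own finish_msg follows just below.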
00:19:12.432 [2024-11-28 05:08:41.595522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.432 [2024-11-28 05:08:41.595532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:12.432 [2024-11-28 05:08:41.595542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.432 [2024-11-28 05:08:41.595549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.432 [2024-11-28 05:08:41.595696] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.573 ms, result 0 00:19:12.694 00:19:12.694 00:19:12.694 05:08:41 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:12.694 05:08:41 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=87735 00:19:12.694 05:08:41 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 87735 00:19:12.694 05:08:41 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87735 ']' 00:19:12.694 05:08:41 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.694 05:08:41 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.694 05:08:41 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.694 05:08:41 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.694 05:08:41 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:12.694 [2024-11-28 05:08:41.862537] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:19:12.694 [2024-11-28 05:08:41.862696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87735 ] 00:19:12.954 [2024-11-28 05:08:42.010568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.954 [2024-11-28 05:08:42.039119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.525 05:08:42 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.525 05:08:42 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:13.525 05:08:42 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:13.785 [2024-11-28 05:08:42.950445] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:13.785 [2024-11-28 05:08:42.950534] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:14.047 [2024-11-28 05:08:43.127839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.047 [2024-11-28 05:08:43.127904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:14.047 [2024-11-28 05:08:43.127923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:14.047 [2024-11-28 05:08:43.127934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.047 [2024-11-28 05:08:43.134617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.047 [2024-11-28 05:08:43.134743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.047 [2024-11-28 05:08:43.134777] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.656 ms 00:19:14.047 [2024-11-28 05:08:43.134807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.047 [2024-11-28 05:08:43.135167] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:14.047 [2024-11-28 05:08:43.135966] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:14.047 [2024-11-28 05:08:43.136038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.047 [2024-11-28 05:08:43.136068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.047 [2024-11-28 05:08:43.136096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:19:14.047 [2024-11-28 05:08:43.136123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.047 [2024-11-28 05:08:43.139534] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:14.047 [2024-11-28 05:08:43.144112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.047 [2024-11-28 05:08:43.144166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:14.047 [2024-11-28 05:08:43.144195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.582 ms 00:19:14.048 [2024-11-28 05:08:43.144205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.048 [2024-11-28 05:08:43.144300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.048 [2024-11-28 05:08:43.144313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:14.048 [2024-11-28 05:08:43.144329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:14.048 [2024-11-28 05:08:43.144337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.048 [2024-11-28 05:08:43.152043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.048 [2024-11-28 05:08:43.152083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.048 [2024-11-28 05:08:43.152104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.649 ms 00:19:14.048 [2024-11-28 05:08:43.152112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.048 [2024-11-28 05:08:43.152265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.048 [2024-11-28 05:08:43.152282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.048 [2024-11-28 05:08:43.152297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:19:14.048 [2024-11-28 05:08:43.152305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.048 [2024-11-28 05:08:43.152340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.048 [2024-11-28 05:08:43.152353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:14.048 [2024-11-28 05:08:43.152363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:14.048 [2024-11-28 05:08:43.152370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.048 [2024-11-28 05:08:43.152398] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:14.048 [2024-11-28 05:08:43.154453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:14.048 [2024-11-28 05:08:43.154502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.048 [2024-11-28 05:08:43.154514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.064 ms 00:19:14.048 [2024-11-28 05:08:43.154523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.048 [2024-11-28 05:08:43.154564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.048 [2024-11-28 05:08:43.154575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:14.048 [2024-11-28 05:08:43.154583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:14.048 [2024-11-28 05:08:43.154593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.048 [2024-11-28 05:08:43.154613] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:14.048 [2024-11-28 05:08:43.154639] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:14.048 [2024-11-28 05:08:43.154680] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:14.048 [2024-11-28 05:08:43.154703] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:14.048 [2024-11-28 05:08:43.154808] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:14.048 [2024-11-28 05:08:43.154823] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:14.048 [2024-11-28 05:08:43.154834] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:14.048 [2024-11-28 05:08:43.154847] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:14.048 [2024-11-28 05:08:43.154856] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:14.048 [2024-11-28 05:08:43.154871] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:14.048 [2024-11-28 05:08:43.154879] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:14.048 [2024-11-28 05:08:43.154892] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:14.048 [2024-11-28 05:08:43.154900] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:14.048 [2024-11-28 05:08:43.154911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.048 [2024-11-28 05:08:43.154923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:14.048 [2024-11-28 05:08:43.154933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:19:14.048 [2024-11-28 05:08:43.154943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.048 [2024-11-28 05:08:43.155033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.048 [2024-11-28 05:08:43.155080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:14.048 [2024-11-28 05:08:43.155090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:14.048 [2024-11-28 05:08:43.155099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.048 [2024-11-28 05:08:43.155223] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:14.048 [2024-11-28 05:08:43.155246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:14.048 [2024-11-28 05:08:43.155258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:14.048 [2024-11-28 05:08:43.155267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.048 [2024-11-28 05:08:43.155282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:14.048 [2024-11-28 05:08:43.155290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:14.048 [2024-11-28 05:08:43.155300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:14.048 [2024-11-28 05:08:43.155308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:14.048 [2024-11-28 05:08:43.155319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:14.048 [2024-11-28 05:08:43.155327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:14.048 [2024-11-28 05:08:43.155337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:14.048 [2024-11-28 05:08:43.155345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:14.048 [2024-11-28 05:08:43.155355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:14.048 [2024-11-28 05:08:43.155363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:14.048 [2024-11-28 05:08:43.155373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:14.048 [2024-11-28 05:08:43.155382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.048 [2024-11-28 05:08:43.155392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:14.048 [2024-11-28 05:08:43.155399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:14.048 [2024-11-28 05:08:43.155410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.048 [2024-11-28 05:08:43.155419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:14.048 [2024-11-28 05:08:43.155432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:14.048 [2024-11-28 05:08:43.155442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.048 [2024-11-28 05:08:43.155452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:14.048 [2024-11-28 05:08:43.155460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:14.048 [2024-11-28 05:08:43.155470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.048 [2024-11-28 05:08:43.155479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:14.048 [2024-11-28 05:08:43.155489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:14.048 [2024-11-28 05:08:43.155497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.048 [2024-11-28 05:08:43.155506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:14.048 [2024-11-28 05:08:43.155513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:14.048 [2024-11-28 05:08:43.155521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:14.048 [2024-11-28 05:08:43.155529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:14.048 [2024-11-28 
05:08:43.155540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:14.048 [2024-11-28 05:08:43.155547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:14.048 [2024-11-28 05:08:43.155555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:14.048 [2024-11-28 05:08:43.155562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:14.049 [2024-11-28 05:08:43.155574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:14.049 [2024-11-28 05:08:43.155581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:14.049 [2024-11-28 05:08:43.155589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:14.049 [2024-11-28 05:08:43.155595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.049 [2024-11-28 05:08:43.155604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:14.049 [2024-11-28 05:08:43.155610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:14.049 [2024-11-28 05:08:43.155619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.049 [2024-11-28 05:08:43.155625] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:14.049 [2024-11-28 05:08:43.155636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:14.049 [2024-11-28 05:08:43.155643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:14.049 [2024-11-28 05:08:43.155652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:14.049 [2024-11-28 05:08:43.155659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:14.049 [2024-11-28 05:08:43.155668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:14.049 [2024-11-28 05:08:43.155674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:14.049 [2024-11-28 05:08:43.155683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:14.049 [2024-11-28 05:08:43.155690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:14.049 [2024-11-28 05:08:43.155701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:14.049 [2024-11-28 05:08:43.155711] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:14.049 [2024-11-28 05:08:43.155723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:14.049 [2024-11-28 05:08:43.155734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:14.049 [2024-11-28 05:08:43.155745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:14.049 [2024-11-28 05:08:43.155752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:14.049 [2024-11-28 05:08:43.155762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:14.049 [2024-11-28 05:08:43.155770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:14.049 
[2024-11-28 05:08:43.155780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:14.049 [2024-11-28 05:08:43.155787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:14.049 [2024-11-28 05:08:43.155796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:14.049 [2024-11-28 05:08:43.155804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:14.049 [2024-11-28 05:08:43.155814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:14.049 [2024-11-28 05:08:43.155821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:14.049 [2024-11-28 05:08:43.155836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:14.049 [2024-11-28 05:08:43.155844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:14.049 [2024-11-28 05:08:43.155855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:14.049 [2024-11-28 05:08:43.155862] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:14.049 [2024-11-28 05:08:43.155872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:14.049 [2024-11-28 05:08:43.155883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:14.049 [2024-11-28 05:08:43.155892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:14.049 [2024-11-28 05:08:43.155899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:14.049 [2024-11-28 05:08:43.155909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:14.049 [2024-11-28 05:08:43.155917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.049 [2024-11-28 05:08:43.155926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:14.049 [2024-11-28 05:08:43.155933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:19:14.049 [2024-11-28 05:08:43.155942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.049 [2024-11-28 05:08:43.170798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.049 [2024-11-28 05:08:43.170845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.049 [2024-11-28 05:08:43.170857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.797 ms 00:19:14.049 [2024-11-28 05:08:43.170873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.049 [2024-11-28 05:08:43.171004] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.049 [2024-11-28 05:08:43.171023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:14.049 [2024-11-28 05:08:43.171033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:14.049 [2024-11-28 05:08:43.171043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.049 [2024-11-28 05:08:43.184362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.049 [2024-11-28 05:08:43.184409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.049 [2024-11-28 05:08:43.184420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.297 ms 00:19:14.049 [2024-11-28 05:08:43.184438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.049 [2024-11-28 05:08:43.184506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.049 [2024-11-28 05:08:43.184517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:14.049 [2024-11-28 05:08:43.184527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:14.049 [2024-11-28 05:08:43.184536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.049 [2024-11-28 05:08:43.185041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.049 [2024-11-28 05:08:43.185089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:14.049 [2024-11-28 05:08:43.185101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:19:14.049 [2024-11-28 05:08:43.185116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.049 [2024-11-28 05:08:43.185288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.049 [2024-11-28 05:08:43.185307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:14.049 [2024-11-28 05:08:43.185317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:19:14.049 [2024-11-28 05:08:43.185331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.049 [2024-11-28 05:08:43.193540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.049 [2024-11-28 05:08:43.193587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:14.049 [2024-11-28 05:08:43.193598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.184 ms 00:19:14.049 [2024-11-28 05:08:43.193608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.049 [2024-11-28 05:08:43.205077] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:14.049 [2024-11-28 05:08:43.205143] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:14.049 [2024-11-28 05:08:43.205160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.049 [2024-11-28 05:08:43.205173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:14.049 [2024-11-28 05:08:43.205203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.426 ms 00:19:14.049 [2024-11-28 05:08:43.205216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.049 [2024-11-28 05:08:43.221863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.049 [2024-11-28 
05:08:43.221918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:14.050 [2024-11-28 05:08:43.221931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.581 ms 00:19:14.050 [2024-11-28 05:08:43.221944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.050 [2024-11-28 05:08:43.224918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.050 [2024-11-28 05:08:43.224974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:14.050 [2024-11-28 05:08:43.224984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.855 ms 00:19:14.050 [2024-11-28 05:08:43.224995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.050 [2024-11-28 05:08:43.227495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.050 [2024-11-28 05:08:43.227545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:14.050 [2024-11-28 05:08:43.227556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.445 ms 00:19:14.050 [2024-11-28 05:08:43.227565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.050 [2024-11-28 05:08:43.227909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.050 [2024-11-28 05:08:43.227940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:14.050 [2024-11-28 05:08:43.227950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:19:14.050 [2024-11-28 05:08:43.227961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.050 [2024-11-28 05:08:43.252005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.050 [2024-11-28 05:08:43.252078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:14.050 [2024-11-28 05:08:43.252091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.019 ms 00:19:14.050 [2024-11-28 05:08:43.252105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.050 [2024-11-28 05:08:43.261106] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:14.050 [2024-11-28 05:08:43.280205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.050 [2024-11-28 05:08:43.280259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:14.050 [2024-11-28 05:08:43.280275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.976 ms 00:19:14.050 [2024-11-28 05:08:43.280284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.050 [2024-11-28 05:08:43.280379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.050 [2024-11-28 05:08:43.280391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:14.050 [2024-11-28 05:08:43.280404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:14.050 [2024-11-28 05:08:43.280413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.050 [2024-11-28 05:08:43.280478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.050 [2024-11-28 05:08:43.280489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:14.050 [2024-11-28 05:08:43.280500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:14.050 [2024-11-28 
05:08:43.280508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.050 [2024-11-28 05:08:43.280537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.050 [2024-11-28 05:08:43.280552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:14.050 [2024-11-28 05:08:43.280564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:14.050 [2024-11-28 05:08:43.280572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.050 [2024-11-28 05:08:43.280610] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:14.050 [2024-11-28 05:08:43.280619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.050 [2024-11-28 05:08:43.280629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:14.050 [2024-11-28 05:08:43.280637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:14.050 [2024-11-28 05:08:43.280649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.050 [2024-11-28 05:08:43.286635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.050 [2024-11-28 05:08:43.286695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:14.050 [2024-11-28 05:08:43.286709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.962 ms 00:19:14.050 [2024-11-28 05:08:43.286719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.050 [2024-11-28 05:08:43.286817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.050 [2024-11-28 05:08:43.286831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:14.050 [2024-11-28 05:08:43.286841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:14.050 [2024-11-28 05:08:43.286851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.050 [2024-11-28 05:08:43.288978] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:14.050 [2024-11-28 05:08:43.290392] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 160.807 ms, result 0 00:19:14.050 [2024-11-28 05:08:43.292875] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:14.050 Some configs were skipped because the RPC state that can call them passed over. 
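For readers following the test flow: the startup trace above and the unmap/readback entries below correspond to the test/ftl/trim.sh steps (@99-@105) echoed verbatim in this log. A minimal sketch of the same sequence, assuming an SPDK app is already running with the ftl0 bdev configured and using the repo paths shown in the log:

  # trim 1024 blocks at the start of ftl0, then the final 1024 blocks
  # (23591936 + 1024 = 23592960, the total L2P entry count per the layout dump)
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  # after stopping the app, read the data back standalone via spdk_dd
  build/bin/spdk_dd --ib=ftl0 --of=test/ftl/data --count=65536 --json=test/ftl/config/ftl.json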
00:19:14.050 05:08:43 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:14.311 [2024-11-28 05:08:43.530337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.311 [2024-11-28 05:08:43.530395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:14.311 [2024-11-28 05:08:43.530417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.301 ms 00:19:14.311 [2024-11-28 05:08:43.530426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.311 [2024-11-28 05:08:43.530465] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.438 ms, result 0 00:19:14.311 true 00:19:14.311 05:08:43 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:14.572 [2024-11-28 05:08:43.746022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.572 [2024-11-28 05:08:43.746083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:14.572 [2024-11-28 05:08:43.746096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.728 ms 00:19:14.572 [2024-11-28 05:08:43.746108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.572 [2024-11-28 05:08:43.746144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.851 ms, result 0 00:19:14.572 true 00:19:14.572 05:08:43 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 87735 00:19:14.572 05:08:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87735 ']' 00:19:14.572 05:08:43 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87735 00:19:14.572 05:08:43 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:14.572 05:08:43 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.572 05:08:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87735 00:19:14.572 killing process with pid 87735 00:19:14.572 05:08:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.572 05:08:43 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.573 05:08:43 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87735' 00:19:14.573 05:08:43 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87735 00:19:14.573 05:08:43 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87735 00:19:14.835 [2024-11-28 05:08:43.920608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.835 [2024-11-28 05:08:43.920674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:14.835 [2024-11-28 05:08:43.920690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:14.835 [2024-11-28 05:08:43.920698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.835 [2024-11-28 05:08:43.920727] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:14.835 [2024-11-28 05:08:43.921368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.835 [2024-11-28 05:08:43.921402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:14.835 [2024-11-28 05:08:43.921413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.626 ms 00:19:14.835 [2024-11-28 05:08:43.921422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.835 [2024-11-28 05:08:43.921750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.835 [2024-11-28 05:08:43.921809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:14.835 [2024-11-28 05:08:43.921820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:19:14.835 [2024-11-28 05:08:43.921832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.835 [2024-11-28 05:08:43.926050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.835 [2024-11-28 05:08:43.926092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:14.835 [2024-11-28 05:08:43.926104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.194 ms 00:19:14.835 [2024-11-28 05:08:43.926114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.835 [2024-11-28 05:08:43.933270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.835 [2024-11-28 05:08:43.933318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:14.835 [2024-11-28 05:08:43.933328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.120 ms 00:19:14.835 [2024-11-28 05:08:43.933339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.835 [2024-11-28 05:08:43.936027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.835 [2024-11-28 05:08:43.936076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:14.835 [2024-11-28 05:08:43.936086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.604 ms 00:19:14.835 [2024-11-28 05:08:43.936094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.835 [2024-11-28 05:08:43.941058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.835 [2024-11-28 05:08:43.941110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:14.835 [2024-11-28 05:08:43.941123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.919 ms 00:19:14.835 [2024-11-28 05:08:43.941133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.835 [2024-11-28 05:08:43.941281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.835 [2024-11-28 05:08:43.941295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:14.835 [2024-11-28 05:08:43.941304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:19:14.835 [2024-11-28 05:08:43.941314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.835 [2024-11-28 05:08:43.944026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.835 [2024-11-28 05:08:43.944075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:14.835 [2024-11-28 05:08:43.944084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.692 ms 00:19:14.835 [2024-11-28 05:08:43.944098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.835 [2024-11-28 05:08:43.945877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.835 [2024-11-28 05:08:43.945923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:14.835 [2024-11-28 
05:08:43.945933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.737 ms 00:19:14.835 [2024-11-28 05:08:43.945941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.835 [2024-11-28 05:08:43.947498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.835 [2024-11-28 05:08:43.947544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:14.835 [2024-11-28 05:08:43.947553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.513 ms 00:19:14.835 [2024-11-28 05:08:43.947562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.835 [2024-11-28 05:08:43.948995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.835 [2024-11-28 05:08:43.949044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:14.835 [2024-11-28 05:08:43.949054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:19:14.835 [2024-11-28 05:08:43.949062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.835 [2024-11-28 05:08:43.949102] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:14.835 [2024-11-28 05:08:43.949118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:14.835 [2024-11-28 05:08:43.949129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:14.835 [2024-11-28 05:08:43.949141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949289] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 
05:08:43.949503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:19:14.836 [2024-11-28 05:08:43.949738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:19:14.836 [2024-11-28 05:08:43.949956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:14.837 [2024-11-28 05:08:43.949965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:14.837 [2024-11-28 05:08:43.949974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:14.837 [2024-11-28 05:08:43.949985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:14.837 [2024-11-28 05:08:43.949992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:14.837 [2024-11-28 05:08:43.950002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:14.837 [2024-11-28 05:08:43.950010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:14.837 [2024-11-28 05:08:43.950021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:14.837 [2024-11-28 05:08:43.950029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:14.837 [2024-11-28 05:08:43.950047] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:14.837 [2024-11-28 05:08:43.950063] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d2392ca9-dc0a-4073-94e9-23a6ae314a67 00:19:14.837 [2024-11-28 05:08:43.950073] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:14.837 [2024-11-28 05:08:43.950081] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:14.837 [2024-11-28 05:08:43.950091] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:14.837 [2024-11-28 05:08:43.950099] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:14.837 [2024-11-28 05:08:43.950111] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:14.837 [2024-11-28 05:08:43.950119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:14.837 [2024-11-28 05:08:43.950129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:14.837 [2024-11-28 05:08:43.950136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:14.837 [2024-11-28 05:08:43.950145] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:14.837 [2024-11-28 05:08:43.950152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.837 [2024-11-28 05:08:43.950161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:14.837 [2024-11-28 05:08:43.950170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:19:14.837 [2024-11-28 05:08:43.950195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.837 [2024-11-28 05:08:43.952305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.837 [2024-11-28 05:08:43.952343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:14.837 [2024-11-28 05:08:43.952359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.073 ms 00:19:14.837 [2024-11-28 05:08:43.952370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.837 [2024-11-28 05:08:43.952479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:14.837 [2024-11-28 05:08:43.952491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:14.837 [2024-11-28 05:08:43.952501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:14.837 [2024-11-28 05:08:43.952514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.837 [2024-11-28 05:08:43.959942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.837 [2024-11-28 05:08:43.960000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:14.837 [2024-11-28 05:08:43.960010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.837 [2024-11-28 05:08:43.960021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.837 [2024-11-28 05:08:43.960113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.837 [2024-11-28 05:08:43.960126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:14.837 [2024-11-28 05:08:43.960134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.837 [2024-11-28 05:08:43.960149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.837 [2024-11-28 05:08:43.960222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.837 [2024-11-28 05:08:43.960235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:14.837 [2024-11-28 05:08:43.960243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.837 [2024-11-28 05:08:43.960253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.837 [2024-11-28 05:08:43.960277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.837 [2024-11-28 05:08:43.960295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:14.837 [2024-11-28 05:08:43.960304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.837 [2024-11-28 05:08:43.960313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.837 [2024-11-28 05:08:43.974559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.837 [2024-11-28 05:08:43.974621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.837 [2024-11-28 05:08:43.974632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.837 [2024-11-28 05:08:43.974653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.837 [2024-11-28 05:08:43.985816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.837 [2024-11-28 05:08:43.985874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.837 [2024-11-28 05:08:43.985886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.837 [2024-11-28 05:08:43.985903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.837 [2024-11-28 05:08:43.985971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.837 [2024-11-28 05:08:43.985985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.837 [2024-11-28 05:08:43.985993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.837 [2024-11-28 05:08:43.986004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:14.837 [2024-11-28 05:08:43.986048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.837 [2024-11-28 05:08:43.986059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.837 [2024-11-28 05:08:43.986067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.837 [2024-11-28 05:08:43.986078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.837 [2024-11-28 05:08:43.986158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.837 [2024-11-28 05:08:43.986170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.837 [2024-11-28 05:08:43.986198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.837 [2024-11-28 05:08:43.986209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.837 [2024-11-28 05:08:43.986244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.837 [2024-11-28 05:08:43.986256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:14.837 [2024-11-28 05:08:43.986265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.837 [2024-11-28 05:08:43.986277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.837 [2024-11-28 05:08:43.986323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.837 [2024-11-28 05:08:43.986336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.837 [2024-11-28 05:08:43.986345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.837 [2024-11-28 05:08:43.986358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.837 [2024-11-28 05:08:43.986409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.837 [2024-11-28 05:08:43.986432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.837 [2024-11-28 05:08:43.986443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.837 [2024-11-28 05:08:43.986453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.837 [2024-11-28 05:08:43.986613] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.972 ms, result 0 00:19:15.098 05:08:44 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:15.098 [2024-11-28 05:08:44.286642] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:19:15.098 [2024-11-28 05:08:44.286801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87777 ] 00:19:15.359 [2024-11-28 05:08:44.436027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.359 [2024-11-28 05:08:44.463637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.359 [2024-11-28 05:08:44.579610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:15.359 [2024-11-28 05:08:44.579706] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:15.622 [2024-11-28 05:08:44.742761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.622 [2024-11-28 05:08:44.742823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:15.622 [2024-11-28 05:08:44.742839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:15.623 [2024-11-28 05:08:44.742848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.623 [2024-11-28 05:08:44.745543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.623 [2024-11-28 05:08:44.745596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:15.623 [2024-11-28 05:08:44.745607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.674 ms 00:19:15.623 [2024-11-28 05:08:44.745615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.623 [2024-11-28 05:08:44.745736] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:15.623 [2024-11-28 05:08:44.746114] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:15.623 [2024-11-28 05:08:44.746161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.623 [2024-11-28 05:08:44.746171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:15.623 [2024-11-28 05:08:44.746197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:19:15.623 [2024-11-28 05:08:44.746209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.623 [2024-11-28 05:08:44.747936] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:15.623 [2024-11-28 05:08:44.751690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.623 [2024-11-28 05:08:44.751750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:15.623 [2024-11-28 05:08:44.751764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.756 ms 00:19:15.623 [2024-11-28 05:08:44.751773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.623 [2024-11-28 05:08:44.751867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.623 [2024-11-28 05:08:44.751879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:15.623 [2024-11-28 05:08:44.751888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:15.623 [2024-11-28 05:08:44.751900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.623 [2024-11-28 05:08:44.759774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:15.623 [2024-11-28 05:08:44.759822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:15.623 [2024-11-28 05:08:44.759832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.823 ms 00:19:15.623 [2024-11-28 05:08:44.759840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.623 [2024-11-28 05:08:44.759978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.623 [2024-11-28 05:08:44.759990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:15.623 [2024-11-28 05:08:44.759999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:15.623 [2024-11-28 05:08:44.760010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.623 [2024-11-28 05:08:44.760037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.623 [2024-11-28 05:08:44.760050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:15.623 [2024-11-28 05:08:44.760059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:15.623 [2024-11-28 05:08:44.760067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.623 [2024-11-28 05:08:44.760089] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:15.623 [2024-11-28 05:08:44.762212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.623 [2024-11-28 05:08:44.762247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:15.623 [2024-11-28 05:08:44.762258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.128 ms 00:19:15.623 [2024-11-28 05:08:44.762270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.623 [2024-11-28 05:08:44.762318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.623 [2024-11-28 05:08:44.762327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:15.623 [2024-11-28 05:08:44.762340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:15.623 [2024-11-28 05:08:44.762348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.623 [2024-11-28 05:08:44.762367] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:15.623 [2024-11-28 05:08:44.762387] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:15.623 [2024-11-28 05:08:44.762433] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:15.623 [2024-11-28 05:08:44.762452] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:15.623 [2024-11-28 05:08:44.762560] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:15.623 [2024-11-28 05:08:44.762570] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:15.623 [2024-11-28 05:08:44.762582] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:15.623 [2024-11-28 05:08:44.762593] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:15.623 [2024-11-28 05:08:44.762603] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:15.623 [2024-11-28 05:08:44.762611] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:15.623 [2024-11-28 05:08:44.762619] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:15.623 [2024-11-28 05:08:44.762626] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:15.623 [2024-11-28 05:08:44.762638] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:15.623 [2024-11-28 05:08:44.762649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.623 [2024-11-28 05:08:44.762657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:15.623 [2024-11-28 05:08:44.762666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:19:15.623 [2024-11-28 05:08:44.762673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.623 [2024-11-28 05:08:44.762764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.623 [2024-11-28 05:08:44.762773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:15.623 [2024-11-28 05:08:44.762781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:15.623 [2024-11-28 05:08:44.762788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.623 [2024-11-28 05:08:44.762889] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:15.623 [2024-11-28 05:08:44.762904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:15.623 [2024-11-28 05:08:44.762913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:15.623 [2024-11-28 05:08:44.762922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.623 [2024-11-28 05:08:44.762931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:15.623 [2024-11-28 05:08:44.762939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:15.623 [2024-11-28 05:08:44.762948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:15.623 [2024-11-28 05:08:44.762958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:15.623 [2024-11-28 05:08:44.762966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:15.623 [2024-11-28 05:08:44.762975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:15.623 [2024-11-28 05:08:44.762982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:15.623 [2024-11-28 05:08:44.762991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:15.623 [2024-11-28 05:08:44.762999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:15.623 [2024-11-28 05:08:44.763007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:15.623 [2024-11-28 05:08:44.763016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:15.623 [2024-11-28 05:08:44.763024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.623 [2024-11-28 05:08:44.763032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:15.623 [2024-11-28 05:08:44.763041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:15.623 [2024-11-28 05:08:44.763048] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.623 [2024-11-28 05:08:44.763056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:15.623 [2024-11-28 05:08:44.763064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:15.623 [2024-11-28 05:08:44.763073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.623 [2024-11-28 05:08:44.763081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:15.623 [2024-11-28 05:08:44.763093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:15.623 [2024-11-28 05:08:44.763101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.623 [2024-11-28 05:08:44.763108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:15.623 [2024-11-28 05:08:44.763116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:15.623 [2024-11-28 05:08:44.763124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.623 [2024-11-28 05:08:44.763132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:15.623 [2024-11-28 05:08:44.763140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:15.623 [2024-11-28 05:08:44.763147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.623 [2024-11-28 05:08:44.763155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:15.623 [2024-11-28 05:08:44.763162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:15.623 [2024-11-28 05:08:44.763170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:15.624 [2024-11-28 05:08:44.763193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:15.624 [2024-11-28 05:08:44.763201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:15.624 [2024-11-28 05:08:44.763208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:15.624 [2024-11-28 05:08:44.763217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:15.624 [2024-11-28 05:08:44.763225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:15.624 [2024-11-28 05:08:44.763236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.624 [2024-11-28 05:08:44.763244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:15.624 [2024-11-28 05:08:44.763251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:15.624 [2024-11-28 05:08:44.763259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.624 [2024-11-28 05:08:44.763270] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:15.624 [2024-11-28 05:08:44.763285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:15.624 [2024-11-28 05:08:44.763295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:15.624 [2024-11-28 05:08:44.763304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.624 [2024-11-28 05:08:44.763316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:15.624 [2024-11-28 05:08:44.763323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:15.624 [2024-11-28 05:08:44.763330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:15.624 
[2024-11-28 05:08:44.763337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:15.624 [2024-11-28 05:08:44.763344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:15.624 [2024-11-28 05:08:44.763351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:15.624 [2024-11-28 05:08:44.763360] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:15.624 [2024-11-28 05:08:44.763370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:15.624 [2024-11-28 05:08:44.763380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:15.624 [2024-11-28 05:08:44.763387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:15.624 [2024-11-28 05:08:44.763394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:15.624 [2024-11-28 05:08:44.763402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:15.624 [2024-11-28 05:08:44.763409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:15.624 [2024-11-28 05:08:44.763416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:15.624 [2024-11-28 05:08:44.763424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:15.624 [2024-11-28 05:08:44.763438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:15.624 [2024-11-28 05:08:44.763444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:15.624 [2024-11-28 05:08:44.763452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:15.624 [2024-11-28 05:08:44.763458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:15.624 [2024-11-28 05:08:44.763466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:15.624 [2024-11-28 05:08:44.763473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:15.624 [2024-11-28 05:08:44.763480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:15.624 [2024-11-28 05:08:44.763488] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:15.624 [2024-11-28 05:08:44.763499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:15.624 [2024-11-28 05:08:44.763510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:15.624 [2024-11-28 05:08:44.763518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:15.624 [2024-11-28 05:08:44.763526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:15.624 [2024-11-28 05:08:44.763535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:15.624 [2024-11-28 05:08:44.763543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.624 [2024-11-28 05:08:44.763552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:15.624 [2024-11-28 05:08:44.763559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:19:15.624 [2024-11-28 05:08:44.763567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.624 [2024-11-28 05:08:44.777322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.624 [2024-11-28 05:08:44.777364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:15.624 [2024-11-28 05:08:44.777376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.704 ms 00:19:15.624 [2024-11-28 05:08:44.777384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.624 [2024-11-28 05:08:44.777517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.624 [2024-11-28 05:08:44.777534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:15.624 [2024-11-28 05:08:44.777543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:15.624 [2024-11-28 05:08:44.777551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.624 [2024-11-28 05:08:44.800484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.624 [2024-11-28 05:08:44.800543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:15.624 [2024-11-28 05:08:44.800559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.909 ms 00:19:15.624 [2024-11-28 05:08:44.800569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.624 [2024-11-28 05:08:44.800680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.624 [2024-11-28 05:08:44.800696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:15.624 [2024-11-28 05:08:44.800708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:15.624 [2024-11-28 05:08:44.800718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.624 [2024-11-28 05:08:44.801308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.624 [2024-11-28 05:08:44.801350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:15.624 [2024-11-28 05:08:44.801364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:19:15.624 [2024-11-28 05:08:44.801375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.624 [2024-11-28 05:08:44.801551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.624 [2024-11-28 05:08:44.801575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:15.624 [2024-11-28 05:08:44.801587] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:19:15.624 [2024-11-28 05:08:44.801598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.624 [2024-11-28 05:08:44.809802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.624 [2024-11-28 05:08:44.809844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:15.624 [2024-11-28 05:08:44.809865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.174 ms 00:19:15.624 [2024-11-28 05:08:44.809873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.624 [2024-11-28 05:08:44.813637] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:15.624 [2024-11-28 05:08:44.813717] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:15.624 [2024-11-28 05:08:44.813731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.624 [2024-11-28 05:08:44.813741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:15.624 [2024-11-28 05:08:44.813750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.748 ms 00:19:15.624 [2024-11-28 05:08:44.813757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.624 [2024-11-28 05:08:44.829211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.624 [2024-11-28 05:08:44.829261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:15.624 [2024-11-28 05:08:44.829273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.392 ms 00:19:15.624 [2024-11-28 05:08:44.829282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.624 [2024-11-28 05:08:44.832013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.624 [2024-11-28 05:08:44.832059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:15.624 [2024-11-28 05:08:44.832069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.641 ms 00:19:15.624 [2024-11-28 05:08:44.832077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.624 [2024-11-28 05:08:44.834510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.624 [2024-11-28 05:08:44.834552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:15.624 [2024-11-28 05:08:44.834560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.371 ms 00:19:15.624 [2024-11-28 05:08:44.834568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.624 [2024-11-28 05:08:44.834942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.624 [2024-11-28 05:08:44.834963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:15.624 [2024-11-28 05:08:44.834973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:19:15.624 [2024-11-28 05:08:44.834982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.624 [2024-11-28 05:08:44.858366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.624 [2024-11-28 05:08:44.858427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:15.624 [2024-11-28 05:08:44.858441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.361 ms 00:19:15.625 [2024-11-28 05:08:44.858450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.625 [2024-11-28 05:08:44.866538] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:15.625 [2024-11-28 05:08:44.885492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.625 [2024-11-28 05:08:44.885554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:15.625 [2024-11-28 05:08:44.885567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.957 ms 00:19:15.625 [2024-11-28 05:08:44.885577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.625 [2024-11-28 05:08:44.885671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.625 [2024-11-28 05:08:44.885708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:15.625 [2024-11-28 05:08:44.885721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:15.625 [2024-11-28 05:08:44.885730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.625 [2024-11-28 05:08:44.885789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.625 [2024-11-28 05:08:44.885800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:15.625 [2024-11-28 05:08:44.885809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:15.625 [2024-11-28 05:08:44.885818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.625 [2024-11-28 05:08:44.885844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.625 [2024-11-28 05:08:44.885854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:15.625 [2024-11-28 05:08:44.885862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:15.625 [2024-11-28 05:08:44.885874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.625 [2024-11-28 05:08:44.885911] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:15.625 [2024-11-28 05:08:44.885922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.625 [2024-11-28 05:08:44.885931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:15.625 [2024-11-28 05:08:44.885939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:15.625 [2024-11-28 05:08:44.885947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.625 [2024-11-28 05:08:44.891939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.625 [2024-11-28 05:08:44.891990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:15.625 [2024-11-28 05:08:44.892002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.969 ms 00:19:15.625 [2024-11-28 05:08:44.892010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.625 [2024-11-28 05:08:44.892109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.625 [2024-11-28 05:08:44.892125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:15.625 [2024-11-28 05:08:44.892136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:15.625 [2024-11-28 05:08:44.892144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.625 
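Each management step in the trace above is reported as an Action record: a name, a duration in milliseconds, and a status. For ranking the slowest startup steps from a saved copy of this console output, a rough GNU sed/awk sketch works (build.log is an assumed file name, not part of the harness; the pipeline relies on every "name:" record being followed by its "duration:" record, which holds for these traces):
# sketch only, not part of the SPDK test suite: re-split the wrapped console
# lines into one record per line, then pair each step name with its duration
sed 's/ 00:[0-9:.]* \[2024-/\n[2024-/g' build.log \
  | awk '/name: /     { name = $0; sub(/.*name: /, "", name) }
         /duration: / { dur = $0; sub(/.*duration: /, "", dur); sub(/ ms.*/, "", dur)
                        printf "%10.3f ms  %s\n", dur, name }' \
  | sort -rn | head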
[2024-11-28 05:08:44.893305] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:15.625 [2024-11-28 05:08:44.894643] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 150.185 ms, result 0 00:19:15.625 [2024-11-28 05:08:44.895916] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:15.625 [2024-11-28 05:08:44.903329] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:17.015  [2024-11-28T05:08:47.243Z] Copying: 19/256 [MB] (19 MBps) [2024-11-28T05:08:48.189Z] Copying: 34/256 [MB] (14 MBps) [2024-11-28T05:08:49.134Z] Copying: 52/256 [MB] (18 MBps) [2024-11-28T05:08:50.079Z] Copying: 63/256 [MB] (10 MBps) [2024-11-28T05:08:51.024Z] Copying: 74/256 [MB] (11 MBps) [2024-11-28T05:08:51.969Z] Copying: 94/256 [MB] (20 MBps) [2024-11-28T05:08:52.997Z] Copying: 109/256 [MB] (15 MBps) [2024-11-28T05:08:54.383Z] Copying: 125/256 [MB] (15 MBps) [2024-11-28T05:08:55.326Z] Copying: 143/256 [MB] (18 MBps) [2024-11-28T05:08:56.270Z] Copying: 162/256 [MB] (18 MBps) [2024-11-28T05:08:57.215Z] Copying: 182/256 [MB] (20 MBps) [2024-11-28T05:08:58.160Z] Copying: 200/256 [MB] (17 MBps) [2024-11-28T05:08:59.106Z] Copying: 221/256 [MB] (20 MBps) [2024-11-28T05:09:00.047Z] Copying: 240/256 [MB] (19 MBps) [2024-11-28T05:09:00.311Z] Copying: 256/256 [MB] (average 17 MBps)[2024-11-28 05:09:00.052553] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:31.027 [2024-11-28 05:09:00.055657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.027 [2024-11-28 05:09:00.055753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:31.027 [2024-11-28 05:09:00.055799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:31.027 [2024-11-28 05:09:00.055823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.027 [2024-11-28 05:09:00.055880] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:31.027 [2024-11-28 05:09:00.056977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.027 [2024-11-28 05:09:00.057052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:31.027 [2024-11-28 05:09:00.057077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:19:31.027 [2024-11-28 05:09:00.057097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.027 [2024-11-28 05:09:00.057878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.027 [2024-11-28 05:09:00.057945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:31.027 [2024-11-28 05:09:00.057976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.724 ms 00:19:31.027 [2024-11-28 05:09:00.057996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.027 [2024-11-28 05:09:00.064489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.027 [2024-11-28 05:09:00.064517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:31.027 [2024-11-28 05:09:00.064528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.452 ms 00:19:31.027 [2024-11-28 05:09:00.064538] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.027 [2024-11-28 05:09:00.071513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.027 [2024-11-28 05:09:00.071558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:31.027 [2024-11-28 05:09:00.071570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.932 ms 00:19:31.027 [2024-11-28 05:09:00.071586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.027 [2024-11-28 05:09:00.074755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.027 [2024-11-28 05:09:00.074809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:31.027 [2024-11-28 05:09:00.074820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.113 ms 00:19:31.027 [2024-11-28 05:09:00.074828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.027 [2024-11-28 05:09:00.079795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.027 [2024-11-28 05:09:00.079850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:31.027 [2024-11-28 05:09:00.079863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.897 ms 00:19:31.027 [2024-11-28 05:09:00.079872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.027 [2024-11-28 05:09:00.080013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.027 [2024-11-28 05:09:00.080026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:31.027 [2024-11-28 05:09:00.080043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:19:31.027 [2024-11-28 05:09:00.080051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.027 [2024-11-28 05:09:00.083298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.027 [2024-11-28 05:09:00.083348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:31.027 [2024-11-28 05:09:00.083358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.227 ms 00:19:31.027 [2024-11-28 05:09:00.083366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.027 [2024-11-28 05:09:00.086454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.027 [2024-11-28 05:09:00.086505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:31.027 [2024-11-28 05:09:00.086514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.039 ms 00:19:31.027 [2024-11-28 05:09:00.086522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.027 [2024-11-28 05:09:00.088851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.027 [2024-11-28 05:09:00.088918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:31.027 [2024-11-28 05:09:00.088928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.283 ms 00:19:31.027 [2024-11-28 05:09:00.088935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.027 [2024-11-28 05:09:00.091464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.027 [2024-11-28 05:09:00.091516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:31.027 [2024-11-28 05:09:00.091526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
2.448 ms 00:19:31.027 [2024-11-28 05:09:00.091534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.027 [2024-11-28 05:09:00.091577] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:31.027 [2024-11-28 05:09:00.091593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:31.027 [2024-11-28 05:09:00.091605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:31.027 [2024-11-28 05:09:00.091613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:31.027 [2024-11-28 05:09:00.091625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:31.027 [2024-11-28 05:09:00.091635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:31.027 [2024-11-28 05:09:00.091643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:31.027 [2024-11-28 05:09:00.091652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:31.027 [2024-11-28 05:09:00.091662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:31.027 [2024-11-28 05:09:00.091670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091788] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.091999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 
[2024-11-28 05:09:00.092006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:19:31.028 [2024-11-28 05:09:00.092229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:31.028 [2024-11-28 05:09:00.092419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:31.029 [2024-11-28 05:09:00.092428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:19:31.029 [2024-11-28 05:09:00.092436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:31.029 [2024-11-28 05:09:00.092446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:31.029 [2024-11-28 05:09:00.092463] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:31.029 [2024-11-28 05:09:00.092471] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d2392ca9-dc0a-4073-94e9-23a6ae314a67 00:19:31.029 [2024-11-28 05:09:00.092479] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:31.029 [2024-11-28 05:09:00.092489] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:31.029 [2024-11-28 05:09:00.092497] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:31.029 [2024-11-28 05:09:00.092505] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:31.029 [2024-11-28 05:09:00.092513] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:31.029 [2024-11-28 05:09:00.092525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:31.029 [2024-11-28 05:09:00.092534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:31.029 [2024-11-28 05:09:00.092541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:31.029 [2024-11-28 05:09:00.092547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:31.029 [2024-11-28 05:09:00.092554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.029 [2024-11-28 05:09:00.092562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:31.029 [2024-11-28 05:09:00.092573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:19:31.029 [2024-11-28 05:09:00.092580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.029 [2024-11-28 05:09:00.094928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.029 [2024-11-28 05:09:00.094965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:31.029 [2024-11-28 05:09:00.094976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.329 ms 00:19:31.029 [2024-11-28 05:09:00.094993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.029 [2024-11-28 05:09:00.095112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.029 [2024-11-28 05:09:00.095123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:31.029 [2024-11-28 05:09:00.095132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:31.029 [2024-11-28 05:09:00.095147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.029 [2024-11-28 05:09:00.103257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.029 [2024-11-28 05:09:00.103305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:31.029 [2024-11-28 05:09:00.103324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.029 [2024-11-28 05:09:00.103335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.029 [2024-11-28 05:09:00.103424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.029 [2024-11-28 05:09:00.103436] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:31.029 [2024-11-28 05:09:00.103446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.029 [2024-11-28 05:09:00.103455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.029 [2024-11-28 05:09:00.103503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.029 [2024-11-28 05:09:00.103513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:31.029 [2024-11-28 05:09:00.103522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.029 [2024-11-28 05:09:00.103530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.029 [2024-11-28 05:09:00.103553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.029 [2024-11-28 05:09:00.103561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:31.029 [2024-11-28 05:09:00.103570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.029 [2024-11-28 05:09:00.103579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.029 [2024-11-28 05:09:00.116950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.029 [2024-11-28 05:09:00.117004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:31.029 [2024-11-28 05:09:00.117016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.029 [2024-11-28 05:09:00.117032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.029 [2024-11-28 05:09:00.126977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.029 [2024-11-28 05:09:00.127028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:31.029 [2024-11-28 05:09:00.127039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.029 [2024-11-28 05:09:00.127047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.029 [2024-11-28 05:09:00.127098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.029 [2024-11-28 05:09:00.127118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.029 [2024-11-28 05:09:00.127127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.029 [2024-11-28 05:09:00.127135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.029 [2024-11-28 05:09:00.127168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.029 [2024-11-28 05:09:00.127269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.029 [2024-11-28 05:09:00.127278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.029 [2024-11-28 05:09:00.127286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.029 [2024-11-28 05:09:00.127359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.029 [2024-11-28 05:09:00.127370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.029 [2024-11-28 05:09:00.127387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.029 [2024-11-28 05:09:00.127394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.029 [2024-11-28 05:09:00.127429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback
00:19:31.029 [2024-11-28 05:09:00.127442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:31.029 [2024-11-28 05:09:00.127451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:31.029 [2024-11-28 05:09:00.127459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:31.029 [2024-11-28 05:09:00.127501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:31.029 [2024-11-28 05:09:00.127511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:31.029 [2024-11-28 05:09:00.127520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:31.029 [2024-11-28 05:09:00.127528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:31.029 [2024-11-28 05:09:00.127571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:31.029 [2024-11-28 05:09:00.127594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:31.029 [2024-11-28 05:09:00.127604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:31.029 [2024-11-28 05:09:00.127613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:31.029 [2024-11-28 05:09:00.127766] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.106 ms, result 0
00:19:31.289
00:19:31.289
00:19:31.289 05:09:00 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:19:31.551 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:19:31.551 05:09:00 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:19:31.551 05:09:00 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
00:19:31.551 05:09:00 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:19:31.551 05:09:00 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:31.551 05:09:00 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:19:31.551 05:09:00 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:19:31.551 05:09:00 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 87735
00:19:31.551 05:09:00 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87735 ']'
00:19:31.551 Process with pid 87735 is not found
00:19:31.551 05:09:00 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87735
00:19:31.551 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87735) - No such process
00:19:31.551 05:09:00 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 87735 is not found'
00:19:31.551
00:19:31.551 real 1m5.099s
00:19:31.551 user 1m28.741s
00:19:31.551 sys 0m5.346s
00:19:31.551 05:09:00 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:31.551 ************************************
00:19:31.551 END TEST ftl_trim
00:19:31.551 ************************************
00:19:31.551 05:09:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:19:31.811 05:09:00 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:19:31.811 05:09:00 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:19:31.811 05:09:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:31.811 05:09:00 ftl -- common/autotest_common.sh@10 -- # set +x
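The teardown above ends with killprocess finding pid 87735 already gone: kill -0 delivers no signal at all, it only probes whether the process exists and is signalable, so the nonzero exit plus the shell's "No such process" message means there is nothing left to kill and the harness just echoes the notice. The shape of that check, as a sketch rather than the harness code itself:
# kill -0 sends no signal; it only tests existence/permission for $pid
if kill -0 "$pid" 2>/dev/null; then
  kill "$pid"                       # still running: terminate it
else
  echo "Process with pid $pid is not found"
fi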
00:19:31.811 ************************************
00:19:31.811 START TEST ftl_restore
00:19:31.811 ************************************
00:19:31.811 05:09:00 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:19:31.811 * Looking for test storage...
00:19:31.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:19:31.811 05:09:00 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:19:31.811 05:09:00 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version
00:19:31.811 05:09:00 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:19:31.811 05:09:01 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-:
00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1
00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-:
00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2
00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<'
00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2
00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1
00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in
00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1
00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.811 05:09:01 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:19:31.812 05:09:01 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.812 05:09:01 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:31.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.812 --rc genhtml_branch_coverage=1 00:19:31.812 --rc genhtml_function_coverage=1 00:19:31.812 --rc genhtml_legend=1 00:19:31.812 --rc geninfo_all_blocks=1 00:19:31.812 --rc geninfo_unexecuted_blocks=1 00:19:31.812 00:19:31.812 ' 00:19:31.812 05:09:01 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:31.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.812 --rc genhtml_branch_coverage=1 00:19:31.812 --rc genhtml_function_coverage=1 00:19:31.812 --rc genhtml_legend=1 00:19:31.812 --rc geninfo_all_blocks=1 00:19:31.812 --rc geninfo_unexecuted_blocks=1 00:19:31.812 00:19:31.812 ' 00:19:31.812 05:09:01 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:31.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.812 --rc genhtml_branch_coverage=1 00:19:31.812 --rc genhtml_function_coverage=1 00:19:31.812 --rc genhtml_legend=1 00:19:31.812 --rc geninfo_all_blocks=1 00:19:31.812 --rc geninfo_unexecuted_blocks=1 00:19:31.812 00:19:31.812 ' 00:19:31.812 05:09:01 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:31.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.812 --rc genhtml_branch_coverage=1 00:19:31.812 --rc genhtml_function_coverage=1 00:19:31.812 --rc genhtml_legend=1 00:19:31.812 --rc geninfo_all_blocks=1 00:19:31.812 --rc geninfo_unexecuted_blocks=1 00:19:31.812 00:19:31.812 ' 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
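The lt 1.15 2 gate traced above is scripts/common.sh's cmp_versions: both version strings are split on the IFS set .-: and compared field by field, so lcov 1.15 ranks below 2 and the LCOV coverage options get exported. A compressed, self-contained re-sketch of that comparison (ver_lt is a hypothetical name standing in for the lt/cmp_versions pair, not the harness function verbatim):
ver_lt() {
  local IFS=.-:                     # same separators the trace shows
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local i
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field decides
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1                          # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov predates 2.x"         # succeeds, as in the trace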
00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.bnFbBch2X5 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:31.812 
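restore.sh's option handling, just traced, boils down to: -c picks the NV-cache controller, the remaining positional argument is the base device, and a 240 s timeout plus a restore_kill trap guard the run. Mirrored as a sketch of the flow the log shows, not the script verbatim:
while getopts ':u:c:f' opt; do      # same optstring as the trace
  case $opt in
    c) nv_cache=$OPTARG ;;          # -c 0000:00:10.0 in this run
  esac
done
shift 2                             # past "-c <bdf>" (restore.sh@23)
device=$1                           # 0000:00:11.0, the base bdev controller
timeout=240
trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT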
05:09:01 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=88014 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 88014 00:19:31.812 05:09:01 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 88014 ']' 00:19:31.812 05:09:01 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.812 05:09:01 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.812 05:09:01 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.812 05:09:01 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.812 05:09:01 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:31.812 05:09:01 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:32.073 [2024-11-28 05:09:01.107272] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:19:32.073 [2024-11-28 05:09:01.107367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88014 ] 00:19:32.073 [2024-11-28 05:09:01.243344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.073 [2024-11-28 05:09:01.269403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.015 05:09:01 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.015 05:09:01 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:19:33.015 05:09:01 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:33.015 05:09:01 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:33.015 05:09:01 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:33.015 05:09:01 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:33.015 05:09:01 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:33.015 05:09:01 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:33.015 05:09:02 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:33.015 05:09:02 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:33.015 05:09:02 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:33.015 05:09:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:33.015 05:09:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:33.015 05:09:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:33.015 05:09:02 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:33.015 05:09:02 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:33.276 05:09:02 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:33.276 { 00:19:33.276 "name": "nvme0n1", 00:19:33.276 "aliases": [ 00:19:33.276 "fe05edde-8dc5-4053-a99c-9dcb2bfe14da" 00:19:33.276 ], 00:19:33.276 "product_name": "NVMe disk", 00:19:33.276 "block_size": 4096, 00:19:33.276 "num_blocks": 1310720, 00:19:33.276 "uuid": 
"fe05edde-8dc5-4053-a99c-9dcb2bfe14da", 00:19:33.276 "numa_id": -1, 00:19:33.276 "assigned_rate_limits": { 00:19:33.276 "rw_ios_per_sec": 0, 00:19:33.276 "rw_mbytes_per_sec": 0, 00:19:33.276 "r_mbytes_per_sec": 0, 00:19:33.276 "w_mbytes_per_sec": 0 00:19:33.276 }, 00:19:33.276 "claimed": true, 00:19:33.276 "claim_type": "read_many_write_one", 00:19:33.276 "zoned": false, 00:19:33.276 "supported_io_types": { 00:19:33.276 "read": true, 00:19:33.276 "write": true, 00:19:33.276 "unmap": true, 00:19:33.276 "flush": true, 00:19:33.276 "reset": true, 00:19:33.276 "nvme_admin": true, 00:19:33.276 "nvme_io": true, 00:19:33.276 "nvme_io_md": false, 00:19:33.276 "write_zeroes": true, 00:19:33.276 "zcopy": false, 00:19:33.276 "get_zone_info": false, 00:19:33.277 "zone_management": false, 00:19:33.277 "zone_append": false, 00:19:33.277 "compare": true, 00:19:33.277 "compare_and_write": false, 00:19:33.277 "abort": true, 00:19:33.277 "seek_hole": false, 00:19:33.277 "seek_data": false, 00:19:33.277 "copy": true, 00:19:33.277 "nvme_iov_md": false 00:19:33.277 }, 00:19:33.277 "driver_specific": { 00:19:33.277 "nvme": [ 00:19:33.277 { 00:19:33.277 "pci_address": "0000:00:11.0", 00:19:33.277 "trid": { 00:19:33.277 "trtype": "PCIe", 00:19:33.277 "traddr": "0000:00:11.0" 00:19:33.277 }, 00:19:33.277 "ctrlr_data": { 00:19:33.277 "cntlid": 0, 00:19:33.277 "vendor_id": "0x1b36", 00:19:33.277 "model_number": "QEMU NVMe Ctrl", 00:19:33.277 "serial_number": "12341", 00:19:33.277 "firmware_revision": "8.0.0", 00:19:33.277 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:33.277 "oacs": { 00:19:33.277 "security": 0, 00:19:33.277 "format": 1, 00:19:33.277 "firmware": 0, 00:19:33.277 "ns_manage": 1 00:19:33.277 }, 00:19:33.277 "multi_ctrlr": false, 00:19:33.277 "ana_reporting": false 00:19:33.277 }, 00:19:33.277 "vs": { 00:19:33.277 "nvme_version": "1.4" 00:19:33.277 }, 00:19:33.277 "ns_data": { 00:19:33.277 "id": 1, 00:19:33.277 "can_share": false 00:19:33.277 } 00:19:33.277 } 00:19:33.277 ], 00:19:33.277 "mp_policy": "active_passive" 00:19:33.277 } 00:19:33.277 } 00:19:33.277 ]' 00:19:33.277 05:09:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:33.277 05:09:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:33.277 05:09:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:33.277 05:09:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:33.277 05:09:02 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:33.277 05:09:02 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:19:33.277 05:09:02 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:33.277 05:09:02 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:33.277 05:09:02 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:33.277 05:09:02 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:33.277 05:09:02 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:33.538 05:09:02 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=ea56ed91-ad4d-4536-8d77-f3ed03e8865c 00:19:33.538 05:09:02 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:33.538 05:09:02 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ea56ed91-ad4d-4536-8d77-f3ed03e8865c 00:19:33.800 05:09:02 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:19:34.062 05:09:03 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=28eb41a2-cb29-47c3-9a24-fd7e50964750 00:19:34.062 05:09:03 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 28eb41a2-cb29-47c3-9a24-fd7e50964750 00:19:34.325 05:09:03 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=1846feb6-2e6d-4283-8574-83c5c9d3fd2c 00:19:34.325 05:09:03 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:34.325 05:09:03 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1846feb6-2e6d-4283-8574-83c5c9d3fd2c 00:19:34.325 05:09:03 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:34.325 05:09:03 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:34.325 05:09:03 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=1846feb6-2e6d-4283-8574-83c5c9d3fd2c 00:19:34.325 05:09:03 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:34.325 05:09:03 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 1846feb6-2e6d-4283-8574-83c5c9d3fd2c 00:19:34.325 05:09:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=1846feb6-2e6d-4283-8574-83c5c9d3fd2c 00:19:34.325 05:09:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:34.325 05:09:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:34.325 05:09:03 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:34.325 05:09:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1846feb6-2e6d-4283-8574-83c5c9d3fd2c 00:19:34.587 05:09:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:34.587 { 00:19:34.587 "name": "1846feb6-2e6d-4283-8574-83c5c9d3fd2c", 00:19:34.587 "aliases": [ 00:19:34.587 "lvs/nvme0n1p0" 00:19:34.587 ], 00:19:34.587 "product_name": "Logical Volume", 00:19:34.587 "block_size": 4096, 00:19:34.587 "num_blocks": 26476544, 00:19:34.587 "uuid": "1846feb6-2e6d-4283-8574-83c5c9d3fd2c", 00:19:34.587 "assigned_rate_limits": { 00:19:34.587 "rw_ios_per_sec": 0, 00:19:34.587 "rw_mbytes_per_sec": 0, 00:19:34.588 "r_mbytes_per_sec": 0, 00:19:34.588 "w_mbytes_per_sec": 0 00:19:34.588 }, 00:19:34.588 "claimed": false, 00:19:34.588 "zoned": false, 00:19:34.588 "supported_io_types": { 00:19:34.588 "read": true, 00:19:34.588 "write": true, 00:19:34.588 "unmap": true, 00:19:34.588 "flush": false, 00:19:34.588 "reset": true, 00:19:34.588 "nvme_admin": false, 00:19:34.588 "nvme_io": false, 00:19:34.588 "nvme_io_md": false, 00:19:34.588 "write_zeroes": true, 00:19:34.588 "zcopy": false, 00:19:34.588 "get_zone_info": false, 00:19:34.588 "zone_management": false, 00:19:34.588 "zone_append": false, 00:19:34.588 "compare": false, 00:19:34.588 "compare_and_write": false, 00:19:34.588 "abort": false, 00:19:34.588 "seek_hole": true, 00:19:34.588 "seek_data": true, 00:19:34.588 "copy": false, 00:19:34.588 "nvme_iov_md": false 00:19:34.588 }, 00:19:34.588 "driver_specific": { 00:19:34.588 "lvol": { 00:19:34.588 "lvol_store_uuid": "28eb41a2-cb29-47c3-9a24-fd7e50964750", 00:19:34.588 "base_bdev": "nvme0n1", 00:19:34.588 "thin_provision": true, 00:19:34.588 "num_allocated_clusters": 0, 00:19:34.588 "snapshot": false, 00:19:34.588 "clone": false, 00:19:34.588 "esnap_clone": false 00:19:34.588 } 00:19:34.588 } 00:19:34.588 } 00:19:34.588 ]' 00:19:34.588 05:09:03 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:34.588 05:09:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:34.588 05:09:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:34.588 05:09:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:34.588 05:09:03 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:34.588 05:09:03 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:34.588 05:09:03 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:34.588 05:09:03 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:34.588 05:09:03 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:34.848 05:09:03 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:34.848 05:09:03 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:34.848 05:09:03 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 1846feb6-2e6d-4283-8574-83c5c9d3fd2c 00:19:34.848 05:09:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=1846feb6-2e6d-4283-8574-83c5c9d3fd2c 00:19:34.848 05:09:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:34.848 05:09:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:34.848 05:09:03 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:34.849 05:09:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1846feb6-2e6d-4283-8574-83c5c9d3fd2c 00:19:35.108 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:35.108 { 00:19:35.108 "name": "1846feb6-2e6d-4283-8574-83c5c9d3fd2c", 00:19:35.108 "aliases": [ 00:19:35.108 "lvs/nvme0n1p0" 00:19:35.108 ], 00:19:35.108 "product_name": "Logical Volume", 00:19:35.108 "block_size": 4096, 00:19:35.108 "num_blocks": 26476544, 00:19:35.108 "uuid": "1846feb6-2e6d-4283-8574-83c5c9d3fd2c", 00:19:35.108 "assigned_rate_limits": { 00:19:35.108 "rw_ios_per_sec": 0, 00:19:35.108 "rw_mbytes_per_sec": 0, 00:19:35.108 "r_mbytes_per_sec": 0, 00:19:35.108 "w_mbytes_per_sec": 0 00:19:35.108 }, 00:19:35.108 "claimed": false, 00:19:35.108 "zoned": false, 00:19:35.108 "supported_io_types": { 00:19:35.108 "read": true, 00:19:35.108 "write": true, 00:19:35.108 "unmap": true, 00:19:35.108 "flush": false, 00:19:35.108 "reset": true, 00:19:35.108 "nvme_admin": false, 00:19:35.108 "nvme_io": false, 00:19:35.108 "nvme_io_md": false, 00:19:35.108 "write_zeroes": true, 00:19:35.108 "zcopy": false, 00:19:35.108 "get_zone_info": false, 00:19:35.108 "zone_management": false, 00:19:35.108 "zone_append": false, 00:19:35.108 "compare": false, 00:19:35.108 "compare_and_write": false, 00:19:35.108 "abort": false, 00:19:35.108 "seek_hole": true, 00:19:35.108 "seek_data": true, 00:19:35.108 "copy": false, 00:19:35.108 "nvme_iov_md": false 00:19:35.108 }, 00:19:35.108 "driver_specific": { 00:19:35.108 "lvol": { 00:19:35.108 "lvol_store_uuid": "28eb41a2-cb29-47c3-9a24-fd7e50964750", 00:19:35.108 "base_bdev": "nvme0n1", 00:19:35.108 "thin_provision": true, 00:19:35.108 "num_allocated_clusters": 0, 00:19:35.108 "snapshot": false, 00:19:35.108 "clone": false, 00:19:35.108 "esnap_clone": false 00:19:35.108 } 00:19:35.108 } 00:19:35.108 } 00:19:35.108 ]' 00:19:35.108 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
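Each get_bdev_size() call traced above reads the block_size and num_blocks fields from bdev_get_bdevs and reports the bdev size in MiB: 4096 bytes/block * 26476544 blocks / (1024 * 1024) = 103424 MiB. A minimal standalone sketch of the same computation (hypothetical one-liner; the rpc.py path and the lvol alias lvs/nvme0n1p0 are as used in this run):

  # Derive the lvol size in MiB from block_size and num_blocks,
  # mirroring the get_bdev_size() helper traced in this log.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b lvs/nvme0n1p0 \
    | jq '.[0].block_size * .[0].num_blocks / (1024 * 1024)'
  # expected output for this bdev: 103424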
00:19:35.108 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:35.108 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:35.108 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:35.108 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:35.108 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:35.108 05:09:04 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:35.108 05:09:04 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:35.367 05:09:04 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:35.367 05:09:04 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 1846feb6-2e6d-4283-8574-83c5c9d3fd2c 00:19:35.367 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=1846feb6-2e6d-4283-8574-83c5c9d3fd2c 00:19:35.367 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:35.367 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:35.367 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:35.367 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1846feb6-2e6d-4283-8574-83c5c9d3fd2c 00:19:35.367 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:35.367 { 00:19:35.367 "name": "1846feb6-2e6d-4283-8574-83c5c9d3fd2c", 00:19:35.367 "aliases": [ 00:19:35.367 "lvs/nvme0n1p0" 00:19:35.367 ], 00:19:35.367 "product_name": "Logical Volume", 00:19:35.367 "block_size": 4096, 00:19:35.367 "num_blocks": 26476544, 00:19:35.367 "uuid": "1846feb6-2e6d-4283-8574-83c5c9d3fd2c", 00:19:35.367 "assigned_rate_limits": { 00:19:35.367 "rw_ios_per_sec": 0, 00:19:35.367 "rw_mbytes_per_sec": 0, 00:19:35.367 "r_mbytes_per_sec": 0, 00:19:35.367 "w_mbytes_per_sec": 0 00:19:35.367 }, 00:19:35.367 "claimed": false, 00:19:35.367 "zoned": false, 00:19:35.367 "supported_io_types": { 00:19:35.367 "read": true, 00:19:35.367 "write": true, 00:19:35.367 "unmap": true, 00:19:35.367 "flush": false, 00:19:35.367 "reset": true, 00:19:35.367 "nvme_admin": false, 00:19:35.367 "nvme_io": false, 00:19:35.367 "nvme_io_md": false, 00:19:35.367 "write_zeroes": true, 00:19:35.367 "zcopy": false, 00:19:35.367 "get_zone_info": false, 00:19:35.367 "zone_management": false, 00:19:35.367 "zone_append": false, 00:19:35.367 "compare": false, 00:19:35.367 "compare_and_write": false, 00:19:35.367 "abort": false, 00:19:35.367 "seek_hole": true, 00:19:35.367 "seek_data": true, 00:19:35.367 "copy": false, 00:19:35.367 "nvme_iov_md": false 00:19:35.367 }, 00:19:35.367 "driver_specific": { 00:19:35.367 "lvol": { 00:19:35.367 "lvol_store_uuid": "28eb41a2-cb29-47c3-9a24-fd7e50964750", 00:19:35.367 "base_bdev": "nvme0n1", 00:19:35.367 "thin_provision": true, 00:19:35.367 "num_allocated_clusters": 0, 00:19:35.367 "snapshot": false, 00:19:35.367 "clone": false, 00:19:35.367 "esnap_clone": false 00:19:35.367 } 00:19:35.367 } 00:19:35.367 } 00:19:35.367 ]' 00:19:35.367 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:35.626 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:35.626 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:35.626 05:09:04 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:19:35.626 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:35.626 05:09:04 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:35.626 05:09:04 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:35.626 05:09:04 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1846feb6-2e6d-4283-8574-83c5c9d3fd2c --l2p_dram_limit 10' 00:19:35.626 05:09:04 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:35.626 05:09:04 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:35.626 05:09:04 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:35.626 05:09:04 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:35.626 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:35.627 05:09:04 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1846feb6-2e6d-4283-8574-83c5c9d3fd2c --l2p_dram_limit 10 -c nvc0n1p0 00:19:35.627 [2024-11-28 05:09:04.883419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.627 [2024-11-28 05:09:04.883456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:35.627 [2024-11-28 05:09:04.883467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:35.627 [2024-11-28 05:09:04.883475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.627 [2024-11-28 05:09:04.883517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.627 [2024-11-28 05:09:04.883528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.627 [2024-11-28 05:09:04.883534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:35.627 [2024-11-28 05:09:04.883542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.627 [2024-11-28 05:09:04.883557] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:35.627 [2024-11-28 05:09:04.883851] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:35.627 [2024-11-28 05:09:04.883871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.627 [2024-11-28 05:09:04.883879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.627 [2024-11-28 05:09:04.883885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:19:35.627 [2024-11-28 05:09:04.883892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.627 [2024-11-28 05:09:04.883917] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8ceea9fc-b1d3-4cf8-b052-f90915354471 00:19:35.627 [2024-11-28 05:09:04.884849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.627 [2024-11-28 05:09:04.884878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:35.627 [2024-11-28 05:09:04.884887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:35.627 [2024-11-28 05:09:04.884894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.627 [2024-11-28 05:09:04.889496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.627 [2024-11-28 
05:09:04.889525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.627 [2024-11-28 05:09:04.889535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.546 ms 00:19:35.627 [2024-11-28 05:09:04.889541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.627 [2024-11-28 05:09:04.889602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.627 [2024-11-28 05:09:04.889609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.627 [2024-11-28 05:09:04.889617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:35.627 [2024-11-28 05:09:04.889623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.627 [2024-11-28 05:09:04.889664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.627 [2024-11-28 05:09:04.889671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:35.627 [2024-11-28 05:09:04.889688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:35.627 [2024-11-28 05:09:04.889695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.627 [2024-11-28 05:09:04.889715] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:35.627 [2024-11-28 05:09:04.890936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.627 [2024-11-28 05:09:04.890965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.627 [2024-11-28 05:09:04.890973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:19:35.627 [2024-11-28 05:09:04.890980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.627 [2024-11-28 05:09:04.891005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.627 [2024-11-28 05:09:04.891013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:35.627 [2024-11-28 05:09:04.891020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:35.627 [2024-11-28 05:09:04.891028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.627 [2024-11-28 05:09:04.891045] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:35.627 [2024-11-28 05:09:04.891155] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:35.627 [2024-11-28 05:09:04.891164] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:35.627 [2024-11-28 05:09:04.891173] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:35.627 [2024-11-28 05:09:04.891192] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:35.627 [2024-11-28 05:09:04.891203] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:35.627 [2024-11-28 05:09:04.891209] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:35.627 [2024-11-28 05:09:04.891222] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:35.627 [2024-11-28 05:09:04.891228] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:35.627 [2024-11-28 05:09:04.891234] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:35.627 [2024-11-28 05:09:04.891240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.627 [2024-11-28 05:09:04.891247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:35.627 [2024-11-28 05:09:04.891253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:19:35.627 [2024-11-28 05:09:04.891259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.627 [2024-11-28 05:09:04.891323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.627 [2024-11-28 05:09:04.891334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:35.627 [2024-11-28 05:09:04.891342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:35.627 [2024-11-28 05:09:04.891350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.627 [2024-11-28 05:09:04.891422] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:35.627 [2024-11-28 05:09:04.891430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:35.627 [2024-11-28 05:09:04.891436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.627 [2024-11-28 05:09:04.891443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.627 [2024-11-28 05:09:04.891451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:35.627 [2024-11-28 05:09:04.891457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:35.627 [2024-11-28 05:09:04.891462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:35.627 [2024-11-28 05:09:04.891580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:35.627 [2024-11-28 05:09:04.891586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:35.627 [2024-11-28 05:09:04.891595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.627 [2024-11-28 05:09:04.891601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:35.627 [2024-11-28 05:09:04.891608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:35.627 [2024-11-28 05:09:04.891613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.627 [2024-11-28 05:09:04.891622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:35.627 [2024-11-28 05:09:04.891628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:35.627 [2024-11-28 05:09:04.891635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.627 [2024-11-28 05:09:04.891640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:35.627 [2024-11-28 05:09:04.891648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:35.627 [2024-11-28 05:09:04.891654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.627 [2024-11-28 05:09:04.891661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:35.627 [2024-11-28 05:09:04.891667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:35.627 [2024-11-28 05:09:04.891674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.627 [2024-11-28 05:09:04.891680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:35.627 
[2024-11-28 05:09:04.891686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:35.627 [2024-11-28 05:09:04.891693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.627 [2024-11-28 05:09:04.891700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:35.627 [2024-11-28 05:09:04.891706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:35.627 [2024-11-28 05:09:04.891714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.627 [2024-11-28 05:09:04.891720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:35.627 [2024-11-28 05:09:04.891728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:35.627 [2024-11-28 05:09:04.891734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.627 [2024-11-28 05:09:04.891741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:35.627 [2024-11-28 05:09:04.891747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:35.627 [2024-11-28 05:09:04.891754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.627 [2024-11-28 05:09:04.891761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:35.627 [2024-11-28 05:09:04.891769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:35.627 [2024-11-28 05:09:04.891775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.627 [2024-11-28 05:09:04.891782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:35.627 [2024-11-28 05:09:04.891787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:35.627 [2024-11-28 05:09:04.891794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.627 [2024-11-28 05:09:04.891800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:35.627 [2024-11-28 05:09:04.891808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:35.627 [2024-11-28 05:09:04.891813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.627 [2024-11-28 05:09:04.891820] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:35.628 [2024-11-28 05:09:04.891831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:35.628 [2024-11-28 05:09:04.891841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.628 [2024-11-28 05:09:04.891848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.628 [2024-11-28 05:09:04.891857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:35.628 [2024-11-28 05:09:04.891863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:35.628 [2024-11-28 05:09:04.891870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:35.628 [2024-11-28 05:09:04.891876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:35.628 [2024-11-28 05:09:04.891883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:35.628 [2024-11-28 05:09:04.891889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:35.628 [2024-11-28 05:09:04.891899] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:35.628 [2024-11-28 
05:09:04.891907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.628 [2024-11-28 05:09:04.891916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:35.628 [2024-11-28 05:09:04.891922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:35.628 [2024-11-28 05:09:04.891932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:35.628 [2024-11-28 05:09:04.891938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:35.628 [2024-11-28 05:09:04.891946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:35.628 [2024-11-28 05:09:04.891952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:35.628 [2024-11-28 05:09:04.891961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:35.628 [2024-11-28 05:09:04.891967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:35.628 [2024-11-28 05:09:04.891974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:35.628 [2024-11-28 05:09:04.891979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:35.628 [2024-11-28 05:09:04.891986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:35.628 [2024-11-28 05:09:04.891991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:35.628 [2024-11-28 05:09:04.891997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:35.628 [2024-11-28 05:09:04.892002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:35.628 [2024-11-28 05:09:04.892009] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:35.628 [2024-11-28 05:09:04.892015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.628 [2024-11-28 05:09:04.892022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:35.628 [2024-11-28 05:09:04.892027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:35.628 [2024-11-28 05:09:04.892034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:35.628 [2024-11-28 05:09:04.892039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:35.628 [2024-11-28 05:09:04.892046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.628 [2024-11-28 05:09:04.892052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:35.628 [2024-11-28 05:09:04.892062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:19:35.628 [2024-11-28 05:09:04.892067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.628 [2024-11-28 05:09:04.892097] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:35.628 [2024-11-28 05:09:04.892108] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:39.836 [2024-11-28 05:09:08.715799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-11-28 05:09:08.715891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:39.836 [2024-11-28 05:09:08.715920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3823.678 ms 00:19:39.836 [2024-11-28 05:09:08.715930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 [2024-11-28 05:09:08.729437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-11-28 05:09:08.729498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:39.836 [2024-11-28 05:09:08.729514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.382 ms 00:19:39.836 [2024-11-28 05:09:08.729523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 [2024-11-28 05:09:08.729657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-11-28 05:09:08.729668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:39.836 [2024-11-28 05:09:08.729692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:19:39.836 [2024-11-28 05:09:08.729700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 [2024-11-28 05:09:08.742346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-11-28 05:09:08.742395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:39.836 [2024-11-28 05:09:08.742409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.602 ms 00:19:39.836 [2024-11-28 05:09:08.742421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 [2024-11-28 05:09:08.742458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-11-28 05:09:08.742467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:39.836 [2024-11-28 05:09:08.742479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:39.836 [2024-11-28 05:09:08.742487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 [2024-11-28 05:09:08.743094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-11-28 05:09:08.743137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:39.836 [2024-11-28 05:09:08.743152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:19:39.836 [2024-11-28 05:09:08.743161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 
[2024-11-28 05:09:08.743311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-11-28 05:09:08.743323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:39.836 [2024-11-28 05:09:08.743336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:39.836 [2024-11-28 05:09:08.743345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 [2024-11-28 05:09:08.751571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-11-28 05:09:08.751613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:39.836 [2024-11-28 05:09:08.751631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.199 ms 00:19:39.836 [2024-11-28 05:09:08.751639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 [2024-11-28 05:09:08.772583] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:39.836 [2024-11-28 05:09:08.776710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-11-28 05:09:08.776764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:39.836 [2024-11-28 05:09:08.776779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.000 ms 00:19:39.836 [2024-11-28 05:09:08.776793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 [2024-11-28 05:09:08.862987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-11-28 05:09:08.863057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:39.836 [2024-11-28 05:09:08.863070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.145 ms 00:19:39.836 [2024-11-28 05:09:08.863084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 [2024-11-28 05:09:08.863315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-11-28 05:09:08.863330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:39.836 [2024-11-28 05:09:08.863340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:19:39.836 [2024-11-28 05:09:08.863350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 [2024-11-28 05:09:08.868789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-11-28 05:09:08.868843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:39.836 [2024-11-28 05:09:08.868860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.417 ms 00:19:39.836 [2024-11-28 05:09:08.868871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 [2024-11-28 05:09:08.873745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-11-28 05:09:08.873796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:39.836 [2024-11-28 05:09:08.873807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.826 ms 00:19:39.836 [2024-11-28 05:09:08.873816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 [2024-11-28 05:09:08.874147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-11-28 05:09:08.874160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:39.836 
[2024-11-28 05:09:08.874169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:19:39.836 [2024-11-28 05:09:08.874198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 [2024-11-28 05:09:08.919040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-11-28 05:09:08.919102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:39.836 [2024-11-28 05:09:08.919124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.805 ms 00:19:39.836 [2024-11-28 05:09:08.919135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 [2024-11-28 05:09:08.925829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-11-28 05:09:08.925883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:39.837 [2024-11-28 05:09:08.925894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.607 ms 00:19:39.837 [2024-11-28 05:09:08.925905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-11-28 05:09:08.931405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-11-28 05:09:08.931462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:39.837 [2024-11-28 05:09:08.931471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.453 ms 00:19:39.837 [2024-11-28 05:09:08.931481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-11-28 05:09:08.937319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-11-28 05:09:08.937372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:39.837 [2024-11-28 05:09:08.937382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.795 ms 00:19:39.837 [2024-11-28 05:09:08.937396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-11-28 05:09:08.937445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-11-28 05:09:08.937457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:39.837 [2024-11-28 05:09:08.937467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:39.837 [2024-11-28 05:09:08.937477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-11-28 05:09:08.937549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-11-28 05:09:08.937562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:39.837 [2024-11-28 05:09:08.937570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:39.837 [2024-11-28 05:09:08.937584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-11-28 05:09:08.938740] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4054.815 ms, result 0 00:19:39.837 { 00:19:39.837 "name": "ftl0", 00:19:39.837 "uuid": "8ceea9fc-b1d3-4cf8-b052-f90915354471" 00:19:39.837 } 00:19:39.837 05:09:08 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:39.837 05:09:08 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:40.098 05:09:09 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:19:40.098 05:09:09 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:40.361 [2024-11-28 05:09:09.381007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.361 [2024-11-28 05:09:09.381071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:40.361 [2024-11-28 05:09:09.381096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:40.361 [2024-11-28 05:09:09.381105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.361 [2024-11-28 05:09:09.381133] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:40.361 [2024-11-28 05:09:09.381948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.361 [2024-11-28 05:09:09.381998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:40.361 [2024-11-28 05:09:09.382011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:19:40.361 [2024-11-28 05:09:09.382026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.361 [2024-11-28 05:09:09.382317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.361 [2024-11-28 05:09:09.382333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:40.361 [2024-11-28 05:09:09.382345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:19:40.361 [2024-11-28 05:09:09.382356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.361 [2024-11-28 05:09:09.385596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.361 [2024-11-28 05:09:09.385620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:40.362 [2024-11-28 05:09:09.385631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:19:40.362 [2024-11-28 05:09:09.385642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.362 [2024-11-28 05:09:09.391861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.362 [2024-11-28 05:09:09.391903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:40.362 [2024-11-28 05:09:09.391914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.194 ms 00:19:40.362 [2024-11-28 05:09:09.391927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.362 [2024-11-28 05:09:09.394689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.362 [2024-11-28 05:09:09.394747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:40.362 [2024-11-28 05:09:09.394758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.670 ms 00:19:40.362 [2024-11-28 05:09:09.394772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.362 [2024-11-28 05:09:09.401717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.362 [2024-11-28 05:09:09.401779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:40.362 [2024-11-28 05:09:09.401790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.900 ms 00:19:40.362 [2024-11-28 05:09:09.401801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.362 [2024-11-28 05:09:09.401934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.362 [2024-11-28 05:09:09.401951] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:40.362 [2024-11-28 05:09:09.401966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:40.362 [2024-11-28 05:09:09.401976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.362 [2024-11-28 05:09:09.404903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.362 [2024-11-28 05:09:09.404963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:40.362 [2024-11-28 05:09:09.404973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.908 ms 00:19:40.362 [2024-11-28 05:09:09.404982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.362 [2024-11-28 05:09:09.407901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.362 [2024-11-28 05:09:09.407957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:40.362 [2024-11-28 05:09:09.407967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.875 ms 00:19:40.362 [2024-11-28 05:09:09.407977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.362 [2024-11-28 05:09:09.410536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.362 [2024-11-28 05:09:09.410587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:40.362 [2024-11-28 05:09:09.410597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.508 ms 00:19:40.362 [2024-11-28 05:09:09.410607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.362 [2024-11-28 05:09:09.413002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.362 [2024-11-28 05:09:09.413056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:40.362 [2024-11-28 05:09:09.413065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.324 ms 00:19:40.362 [2024-11-28 05:09:09.413076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.362 [2024-11-28 05:09:09.413116] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:40.362 [2024-11-28 05:09:09.413134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413237] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 
[2024-11-28 05:09:09.413462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:40.362 [2024-11-28 05:09:09.413659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:19:40.363 [2024-11-28 05:09:09.413699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.413994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.414004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.414012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.414024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.414032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.414042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.414050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.414061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.414069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:40.363 [2024-11-28 05:09:09.414089] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:40.363 [2024-11-28 05:09:09.414098] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ceea9fc-b1d3-4cf8-b052-f90915354471 00:19:40.363 [2024-11-28 05:09:09.414109] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:40.363 [2024-11-28 05:09:09.414117] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:40.363 [2024-11-28 05:09:09.414126] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:40.363 [2024-11-28 05:09:09.414134] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:40.363 [2024-11-28 05:09:09.414147] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:40.363 [2024-11-28 05:09:09.414155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:40.363 [2024-11-28 05:09:09.414164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:40.363 [2024-11-28 05:09:09.414171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:40.363 [2024-11-28 05:09:09.414191] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:19:40.363 [2024-11-28 05:09:09.414199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.363 [2024-11-28 05:09:09.414209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:40.363 [2024-11-28 05:09:09.414221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:19:40.363 [2024-11-28 05:09:09.414231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.363 [2024-11-28 05:09:09.416494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.363 [2024-11-28 05:09:09.416541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:40.363 [2024-11-28 05:09:09.416554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.244 ms 00:19:40.363 [2024-11-28 05:09:09.416565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.363 [2024-11-28 05:09:09.416681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.363 [2024-11-28 05:09:09.416693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:40.363 [2024-11-28 05:09:09.416701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:40.363 [2024-11-28 05:09:09.416711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.363 [2024-11-28 05:09:09.424556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.363 [2024-11-28 05:09:09.424608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.363 [2024-11-28 05:09:09.424622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.363 [2024-11-28 05:09:09.424632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.363 [2024-11-28 05:09:09.424695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.363 [2024-11-28 05:09:09.424706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:40.363 [2024-11-28 05:09:09.424714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.363 [2024-11-28 05:09:09.424724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.363 [2024-11-28 05:09:09.424784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.363 [2024-11-28 05:09:09.424800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:40.363 [2024-11-28 05:09:09.424808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.363 [2024-11-28 05:09:09.424821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.363 [2024-11-28 05:09:09.424839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.363 [2024-11-28 05:09:09.424849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:40.363 [2024-11-28 05:09:09.424857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.363 [2024-11-28 05:09:09.424867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.363 [2024-11-28 05:09:09.439151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.363 [2024-11-28 05:09:09.439236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.363 [2024-11-28 05:09:09.439251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.363 
[2024-11-28 05:09:09.439261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.363 [2024-11-28 05:09:09.450941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.363 [2024-11-28 05:09:09.451003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.363 [2024-11-28 05:09:09.451015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.363 [2024-11-28 05:09:09.451025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.363 [2024-11-28 05:09:09.451106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.363 [2024-11-28 05:09:09.451124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.363 [2024-11-28 05:09:09.451134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.363 [2024-11-28 05:09:09.451151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.363 [2024-11-28 05:09:09.451268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.363 [2024-11-28 05:09:09.451282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.363 [2024-11-28 05:09:09.451290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.363 [2024-11-28 05:09:09.451300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.363 [2024-11-28 05:09:09.451380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.363 [2024-11-28 05:09:09.451393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.364 [2024-11-28 05:09:09.451402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.364 [2024-11-28 05:09:09.451413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.364 [2024-11-28 05:09:09.451451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.364 [2024-11-28 05:09:09.451463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:40.364 [2024-11-28 05:09:09.451476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.364 [2024-11-28 05:09:09.451486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.364 [2024-11-28 05:09:09.451531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.364 [2024-11-28 05:09:09.451558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.364 [2024-11-28 05:09:09.451568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.364 [2024-11-28 05:09:09.451584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.364 [2024-11-28 05:09:09.451641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.364 [2024-11-28 05:09:09.451663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.364 [2024-11-28 05:09:09.451677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.364 [2024-11-28 05:09:09.451692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.364 [2024-11-28 05:09:09.451841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.792 ms, result 0 00:19:40.364 true 00:19:40.364 05:09:09 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 88014 00:19:40.364 
05:09:09 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88014 ']' 00:19:40.364 05:09:09 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88014 00:19:40.364 05:09:09 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:19:40.364 05:09:09 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.364 05:09:09 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88014 00:19:40.364 killing process with pid 88014 00:19:40.364 05:09:09 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.364 05:09:09 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.364 05:09:09 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88014' 00:19:40.364 05:09:09 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 88014 00:19:40.364 05:09:09 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 88014 00:19:45.681 05:09:14 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:49.880 262144+0 records in 00:19:49.880 262144+0 records out 00:19:49.880 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.85853 s, 278 MB/s 00:19:49.880 05:09:18 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:51.256 05:09:20 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:51.256 [2024-11-28 05:09:20.224442] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:19:51.256 [2024-11-28 05:09:20.224537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88226 ] 00:19:51.256 [2024-11-28 05:09:20.372236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.256 [2024-11-28 05:09:20.392478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.256 [2024-11-28 05:09:20.483701] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:51.256 [2024-11-28 05:09:20.483770] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:51.520 [2024-11-28 05:09:20.640023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.520 [2024-11-28 05:09:20.640071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:51.520 [2024-11-28 05:09:20.640085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:51.520 [2024-11-28 05:09:20.640094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.520 [2024-11-28 05:09:20.640147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.520 [2024-11-28 05:09:20.640158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:51.520 [2024-11-28 05:09:20.640166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:51.520 [2024-11-28 05:09:20.640193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.520 [2024-11-28 05:09:20.640217] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:19:51.520 [2024-11-28 05:09:20.640682] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:51.520 [2024-11-28 05:09:20.640725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.520 [2024-11-28 05:09:20.640740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.520 [2024-11-28 05:09:20.640752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:19:51.520 [2024-11-28 05:09:20.640760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.520 [2024-11-28 05:09:20.642018] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:51.520 [2024-11-28 05:09:20.644901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.520 [2024-11-28 05:09:20.644938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:51.520 [2024-11-28 05:09:20.644948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.885 ms 00:19:51.520 [2024-11-28 05:09:20.644962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.520 [2024-11-28 05:09:20.645015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.520 [2024-11-28 05:09:20.645027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:51.520 [2024-11-28 05:09:20.645040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:51.520 [2024-11-28 05:09:20.645047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.520 [2024-11-28 05:09:20.650686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.520 [2024-11-28 05:09:20.650718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.520 [2024-11-28 05:09:20.650735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.596 ms 00:19:51.520 [2024-11-28 05:09:20.650745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.520 [2024-11-28 05:09:20.650823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.520 [2024-11-28 05:09:20.650832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:51.520 [2024-11-28 05:09:20.650840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:51.520 [2024-11-28 05:09:20.650849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.520 [2024-11-28 05:09:20.650889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.520 [2024-11-28 05:09:20.650898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:51.520 [2024-11-28 05:09:20.650906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:51.520 [2024-11-28 05:09:20.650916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.520 [2024-11-28 05:09:20.650939] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:51.520 [2024-11-28 05:09:20.652414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.520 [2024-11-28 05:09:20.652443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.520 [2024-11-28 05:09:20.652452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.480 ms 00:19:51.520 [2024-11-28 05:09:20.652460] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.520 [2024-11-28 05:09:20.652488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.520 [2024-11-28 05:09:20.652497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:51.520 [2024-11-28 05:09:20.652505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:51.520 [2024-11-28 05:09:20.652514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.520 [2024-11-28 05:09:20.652533] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:51.520 [2024-11-28 05:09:20.652554] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:51.520 [2024-11-28 05:09:20.652595] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:51.520 [2024-11-28 05:09:20.652610] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:51.520 [2024-11-28 05:09:20.652711] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:51.520 [2024-11-28 05:09:20.652721] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:51.520 [2024-11-28 05:09:20.652734] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:51.521 [2024-11-28 05:09:20.652744] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:51.521 [2024-11-28 05:09:20.652753] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:51.521 [2024-11-28 05:09:20.652761] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:51.521 [2024-11-28 05:09:20.652769] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:51.521 [2024-11-28 05:09:20.652781] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:51.521 [2024-11-28 05:09:20.652789] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:51.521 [2024-11-28 05:09:20.652796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.521 [2024-11-28 05:09:20.652803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:51.521 [2024-11-28 05:09:20.652810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:19:51.521 [2024-11-28 05:09:20.652817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.521 [2024-11-28 05:09:20.652903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.521 [2024-11-28 05:09:20.652911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:51.521 [2024-11-28 05:09:20.652919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:51.521 [2024-11-28 05:09:20.652929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.521 [2024-11-28 05:09:20.653027] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:51.521 [2024-11-28 05:09:20.653043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:51.521 [2024-11-28 05:09:20.653052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:51.521 
[2024-11-28 05:09:20.653061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.521 [2024-11-28 05:09:20.653069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:51.521 [2024-11-28 05:09:20.653077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:51.521 [2024-11-28 05:09:20.653085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:51.521 [2024-11-28 05:09:20.653093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:51.521 [2024-11-28 05:09:20.653101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:51.521 [2024-11-28 05:09:20.653108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:51.521 [2024-11-28 05:09:20.653116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:51.521 [2024-11-28 05:09:20.653125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:51.521 [2024-11-28 05:09:20.653133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:51.521 [2024-11-28 05:09:20.653141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:51.521 [2024-11-28 05:09:20.653150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:51.521 [2024-11-28 05:09:20.653158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.521 [2024-11-28 05:09:20.653165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:51.521 [2024-11-28 05:09:20.653173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:51.521 [2024-11-28 05:09:20.653195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.521 [2024-11-28 05:09:20.653203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:51.521 [2024-11-28 05:09:20.653211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:51.521 [2024-11-28 05:09:20.653219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:51.521 [2024-11-28 05:09:20.653227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:51.521 [2024-11-28 05:09:20.653234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:51.521 [2024-11-28 05:09:20.653242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:51.521 [2024-11-28 05:09:20.653249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:51.521 [2024-11-28 05:09:20.653257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:51.521 [2024-11-28 05:09:20.653271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:51.521 [2024-11-28 05:09:20.653280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:51.521 [2024-11-28 05:09:20.653288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:51.521 [2024-11-28 05:09:20.653295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:51.521 [2024-11-28 05:09:20.653303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:51.521 [2024-11-28 05:09:20.653310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:51.521 [2024-11-28 05:09:20.653318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:51.521 [2024-11-28 05:09:20.653325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:19:51.521 [2024-11-28 05:09:20.653332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:51.521 [2024-11-28 05:09:20.653340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:51.521 [2024-11-28 05:09:20.653347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:51.521 [2024-11-28 05:09:20.653355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:51.521 [2024-11-28 05:09:20.653363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.521 [2024-11-28 05:09:20.653370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:51.521 [2024-11-28 05:09:20.653377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:51.521 [2024-11-28 05:09:20.653385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.521 [2024-11-28 05:09:20.653394] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:51.521 [2024-11-28 05:09:20.653404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:51.521 [2024-11-28 05:09:20.653412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:51.521 [2024-11-28 05:09:20.653420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:51.521 [2024-11-28 05:09:20.653429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:51.521 [2024-11-28 05:09:20.653435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:51.521 [2024-11-28 05:09:20.653442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:51.521 [2024-11-28 05:09:20.653449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:51.521 [2024-11-28 05:09:20.653455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:51.521 [2024-11-28 05:09:20.653461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:51.521 [2024-11-28 05:09:20.653469] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:51.521 [2024-11-28 05:09:20.653481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:51.521 [2024-11-28 05:09:20.653493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:51.521 [2024-11-28 05:09:20.653500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:51.521 [2024-11-28 05:09:20.653506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:51.521 [2024-11-28 05:09:20.653513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:51.521 [2024-11-28 05:09:20.653522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:51.521 [2024-11-28 05:09:20.653529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:51.521 [2024-11-28 05:09:20.653536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:51.521 [2024-11-28 05:09:20.653544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:51.521 [2024-11-28 05:09:20.653552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:51.522 [2024-11-28 05:09:20.653564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:51.522 [2024-11-28 05:09:20.653571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:51.522 [2024-11-28 05:09:20.653578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:51.522 [2024-11-28 05:09:20.653585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:51.522 [2024-11-28 05:09:20.653592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:51.522 [2024-11-28 05:09:20.653600] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:51.522 [2024-11-28 05:09:20.653608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:51.522 [2024-11-28 05:09:20.653618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:51.522 [2024-11-28 05:09:20.653625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:51.522 [2024-11-28 05:09:20.653632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:51.522 [2024-11-28 05:09:20.653639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:51.522 [2024-11-28 05:09:20.653648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.653656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:51.522 [2024-11-28 05:09:20.653663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:19:51.522 [2024-11-28 05:09:20.653673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.522 [2024-11-28 05:09:20.663606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.663645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.522 [2024-11-28 05:09:20.663659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.873 ms 00:19:51.522 [2024-11-28 05:09:20.663667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.522 [2024-11-28 05:09:20.663753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.663762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:51.522 [2024-11-28 05:09:20.663770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 
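The layout dump above is internally consistent: 20971520 L2P entries at an address size of 4 bytes come to 83886080 bytes, exactly the 80.00 MiB reported for the l2p region, and the same region shows up in the SB metadata layout as type:0x2 with blk_sz:0x5000, that is 20480 blocks of 4 KiB. A small bash sketch that converts the hex blk_offs/blk_sz values to MiB, assuming the 4 KiB FTL block size implied by those figures:

# Convert an FTL SB-layout block count (hex) to MiB, assuming 4 KiB blocks.
blk2mib() {
    local blocks=$(( $1 ))    # bash arithmetic accepts 0x-prefixed hex
    echo "scale=2; $blocks * 4096 / 1048576" | bc
}
blk2mib 0x5000    # l2p blk_sz -> 80.00, matching the 80.00 MiB region dump
blk2mib 0x20      # sb blk_sz  -> .12 (0.125 MiB, shown as 0.12 MiB above)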
00:19:51.522 [2024-11-28 05:09:20.663778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.522 [2024-11-28 05:09:20.680610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.680659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.522 [2024-11-28 05:09:20.680673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.785 ms 00:19:51.522 [2024-11-28 05:09:20.680683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.522 [2024-11-28 05:09:20.680740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.680751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.522 [2024-11-28 05:09:20.680761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:51.522 [2024-11-28 05:09:20.680769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.522 [2024-11-28 05:09:20.681218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.681245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.522 [2024-11-28 05:09:20.681257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:19:51.522 [2024-11-28 05:09:20.681267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.522 [2024-11-28 05:09:20.681411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.681431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.522 [2024-11-28 05:09:20.681442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:19:51.522 [2024-11-28 05:09:20.681453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.522 [2024-11-28 05:09:20.687592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.687628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.522 [2024-11-28 05:09:20.687638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.115 ms 00:19:51.522 [2024-11-28 05:09:20.687645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.522 [2024-11-28 05:09:20.690670] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:51.522 [2024-11-28 05:09:20.690825] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:51.522 [2024-11-28 05:09:20.690844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.690852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:51.522 [2024-11-28 05:09:20.690860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.117 ms 00:19:51.522 [2024-11-28 05:09:20.690867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.522 [2024-11-28 05:09:20.706102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.706254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:51.522 [2024-11-28 05:09:20.706272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.193 ms 00:19:51.522 [2024-11-28 05:09:20.706280] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:51.522 [2024-11-28 05:09:20.708418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.708455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:51.522 [2024-11-28 05:09:20.708464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.099 ms 00:19:51.522 [2024-11-28 05:09:20.708471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.522 [2024-11-28 05:09:20.710449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.710483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:51.522 [2024-11-28 05:09:20.710492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.942 ms 00:19:51.522 [2024-11-28 05:09:20.710499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.522 [2024-11-28 05:09:20.710842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.710854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:51.522 [2024-11-28 05:09:20.710863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:19:51.522 [2024-11-28 05:09:20.710875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.522 [2024-11-28 05:09:20.729403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.729595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:51.522 [2024-11-28 05:09:20.729613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.511 ms 00:19:51.522 [2024-11-28 05:09:20.729622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.522 [2024-11-28 05:09:20.737333] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:51.522 [2024-11-28 05:09:20.739888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.739923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:51.522 [2024-11-28 05:09:20.739939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.224 ms 00:19:51.522 [2024-11-28 05:09:20.739953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.522 [2024-11-28 05:09:20.740025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.740036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:51.522 [2024-11-28 05:09:20.740045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:51.522 [2024-11-28 05:09:20.740059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.522 [2024-11-28 05:09:20.740150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.522 [2024-11-28 05:09:20.740161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:51.522 [2024-11-28 05:09:20.740169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:51.523 [2024-11-28 05:09:20.740209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-11-28 05:09:20.740233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.523 [2024-11-28 05:09:20.740242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:19:51.523 [2024-11-28 05:09:20.740254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:51.523 [2024-11-28 05:09:20.740262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-11-28 05:09:20.740295] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:51.523 [2024-11-28 05:09:20.740305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.523 [2024-11-28 05:09:20.740313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:51.523 [2024-11-28 05:09:20.740324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:51.523 [2024-11-28 05:09:20.740359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-11-28 05:09:20.744951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.523 [2024-11-28 05:09:20.744990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:51.523 [2024-11-28 05:09:20.745000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.574 ms 00:19:51.523 [2024-11-28 05:09:20.745008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-11-28 05:09:20.745087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.523 [2024-11-28 05:09:20.745097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:51.523 [2024-11-28 05:09:20.745105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:51.523 [2024-11-28 05:09:20.745115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.523 [2024-11-28 05:09:20.746091] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 105.641 ms, result 0 00:19:52.911  [2024-11-28T05:09:22.769Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-28T05:09:24.143Z] Copying: 25/1024 [MB] (12 MBps) [2024-11-28T05:09:25.085Z] Copying: 65/1024 [MB] (39 MBps) [2024-11-28T05:09:26.044Z] Copying: 91/1024 [MB] (26 MBps) [2024-11-28T05:09:26.991Z] Copying: 107/1024 [MB] (15 MBps) [2024-11-28T05:09:27.930Z] Copying: 131/1024 [MB] (23 MBps) [2024-11-28T05:09:28.870Z] Copying: 143/1024 [MB] (11 MBps) [2024-11-28T05:09:29.813Z] Copying: 163/1024 [MB] (19 MBps) [2024-11-28T05:09:30.758Z] Copying: 178/1024 [MB] (14 MBps) [2024-11-28T05:09:32.141Z] Copying: 193/1024 [MB] (15 MBps) [2024-11-28T05:09:33.085Z] Copying: 207/1024 [MB] (13 MBps) [2024-11-28T05:09:34.027Z] Copying: 225/1024 [MB] (17 MBps) [2024-11-28T05:09:34.964Z] Copying: 243/1024 [MB] (18 MBps) [2024-11-28T05:09:35.909Z] Copying: 257/1024 [MB] (14 MBps) [2024-11-28T05:09:36.840Z] Copying: 298/1024 [MB] (40 MBps) [2024-11-28T05:09:37.774Z] Copying: 338/1024 [MB] (39 MBps) [2024-11-28T05:09:39.149Z] Copying: 378/1024 [MB] (40 MBps) [2024-11-28T05:09:40.089Z] Copying: 420/1024 [MB] (41 MBps) [2024-11-28T05:09:41.034Z] Copying: 453/1024 [MB] (33 MBps) [2024-11-28T05:09:41.981Z] Copying: 464/1024 [MB] (10 MBps) [2024-11-28T05:09:42.926Z] Copying: 475/1024 [MB] (11 MBps) [2024-11-28T05:09:43.871Z] Copying: 488/1024 [MB] (12 MBps) [2024-11-28T05:09:44.816Z] Copying: 498/1024 [MB] (10 MBps) [2024-11-28T05:09:45.759Z] Copying: 511/1024 [MB] (13 MBps) [2024-11-28T05:09:47.146Z] Copying: 521/1024 [MB] (10 MBps) [2024-11-28T05:09:48.091Z] Copying: 539/1024 [MB] (18 MBps) [2024-11-28T05:09:49.037Z] Copying: 552/1024 [MB] (12 MBps) 
[2024-11-28T05:09:49.982Z] Copying: 562/1024 [MB] (10 MBps) [2024-11-28T05:09:50.923Z] Copying: 586408/1048576 [kB] (10224 kBps) [2024-11-28T05:09:51.869Z] Copying: 592/1024 [MB] (19 MBps) [2024-11-28T05:09:52.812Z] Copying: 607/1024 [MB] (15 MBps) [2024-11-28T05:09:54.199Z] Copying: 621/1024 [MB] (14 MBps) [2024-11-28T05:09:54.774Z] Copying: 633/1024 [MB] (11 MBps) [2024-11-28T05:09:56.188Z] Copying: 648/1024 [MB] (15 MBps) [2024-11-28T05:09:57.189Z] Copying: 660/1024 [MB] (12 MBps) [2024-11-28T05:09:58.123Z] Copying: 702/1024 [MB] (41 MBps) [2024-11-28T05:09:59.064Z] Copying: 742/1024 [MB] (39 MBps) [2024-11-28T05:10:00.010Z] Copying: 784/1024 [MB] (42 MBps) [2024-11-28T05:10:00.952Z] Copying: 798/1024 [MB] (13 MBps) [2024-11-28T05:10:01.895Z] Copying: 818/1024 [MB] (20 MBps) [2024-11-28T05:10:02.835Z] Copying: 832/1024 [MB] (14 MBps) [2024-11-28T05:10:03.767Z] Copying: 860/1024 [MB] (28 MBps) [2024-11-28T05:10:05.155Z] Copying: 903/1024 [MB] (42 MBps) [2024-11-28T05:10:06.100Z] Copying: 925/1024 [MB] (21 MBps) [2024-11-28T05:10:07.043Z] Copying: 936/1024 [MB] (10 MBps) [2024-11-28T05:10:07.979Z] Copying: 946/1024 [MB] (10 MBps) [2024-11-28T05:10:08.909Z] Copying: 973/1024 [MB] (26 MBps) [2024-11-28T05:10:09.169Z] Copying: 1017/1024 [MB] (44 MBps) [2024-11-28T05:10:09.169Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-28 05:10:08.911576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.885 [2024-11-28 05:10:08.911612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:39.885 [2024-11-28 05:10:08.911623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:39.885 [2024-11-28 05:10:08.911636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.885 [2024-11-28 05:10:08.911653] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:39.885 [2024-11-28 05:10:08.912038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.885 [2024-11-28 05:10:08.912052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:39.885 [2024-11-28 05:10:08.912061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:20:39.885 [2024-11-28 05:10:08.912068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.885 [2024-11-28 05:10:08.913473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.885 [2024-11-28 05:10:08.913516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:39.885 [2024-11-28 05:10:08.913523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:20:39.885 [2024-11-28 05:10:08.913529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.885 [2024-11-28 05:10:08.928014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.885 [2024-11-28 05:10:08.928040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:39.885 [2024-11-28 05:10:08.928048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.469 ms 00:20:39.885 [2024-11-28 05:10:08.928054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.885 [2024-11-28 05:10:08.932825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.885 [2024-11-28 05:10:08.932846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:39.885 [2024-11-28 05:10:08.932854] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.748 ms 00:20:39.885 [2024-11-28 05:10:08.932860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.885 [2024-11-28 05:10:08.933734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.885 [2024-11-28 05:10:08.933755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:39.885 [2024-11-28 05:10:08.933763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:20:39.885 [2024-11-28 05:10:08.933769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.885 [2024-11-28 05:10:08.936760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.885 [2024-11-28 05:10:08.936787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:39.885 [2024-11-28 05:10:08.936799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.968 ms 00:20:39.885 [2024-11-28 05:10:08.936805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.885 [2024-11-28 05:10:08.936887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.885 [2024-11-28 05:10:08.936894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:39.885 [2024-11-28 05:10:08.936904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:39.885 [2024-11-28 05:10:08.936909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.885 [2024-11-28 05:10:08.938536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.885 [2024-11-28 05:10:08.938645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:39.885 [2024-11-28 05:10:08.938656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.608 ms 00:20:39.885 [2024-11-28 05:10:08.938662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.885 [2024-11-28 05:10:08.939863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.885 [2024-11-28 05:10:08.939888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:39.885 [2024-11-28 05:10:08.939895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.179 ms 00:20:39.885 [2024-11-28 05:10:08.939900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.885 [2024-11-28 05:10:08.941020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.885 [2024-11-28 05:10:08.941046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:39.885 [2024-11-28 05:10:08.941052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.099 ms 00:20:39.885 [2024-11-28 05:10:08.941057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.885 [2024-11-28 05:10:08.942061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.885 [2024-11-28 05:10:08.942153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:39.885 [2024-11-28 05:10:08.942164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:20:39.885 [2024-11-28 05:10:08.942169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.885 [2024-11-28 05:10:08.942198] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:39.885 [2024-11-28 05:10:08.942209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:39.885 [2024-11-28 05:10:08.942422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942501] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 
05:10:08.942642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:39.886 [2024-11-28 05:10:08.942777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 
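The bands-validity dump lists all 100 bands as free, with zero valid blocks out of 261120 and wr_cnt 0 throughout, so nothing was relocated onto the base-device bands during this run. Rather than eyeballing a hundred near-identical lines, a summary is less error-prone; an awk sketch over the same hypothetical console.log:

# Summarize ftl_dev_dump_bands output: band count per state, total wr_cnt.
awk '/ftl_dev_dump_bands/ && /Band [0-9]+:/ {
         state[$NF]++
         for (i = 1; i <= NF; i++) if ($i == "wr_cnt:") wr += $(i + 1)
     }
     END {
         for (s in state) printf "%s: %d bands\n", s, state[s]
         printf "total wr_cnt: %d\n", wr
     }' console.log

For this log it should print "free: 100 bands" and "total wr_cnt: 0".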
00:20:39.886 [2024-11-28 05:10:08.942789] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:39.886 [2024-11-28 05:10:08.942794] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ceea9fc-b1d3-4cf8-b052-f90915354471 00:20:39.886 [2024-11-28 05:10:08.942801] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:39.886 [2024-11-28 05:10:08.942806] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:39.886 [2024-11-28 05:10:08.942811] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:39.886 [2024-11-28 05:10:08.942816] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:39.886 [2024-11-28 05:10:08.942822] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:39.886 [2024-11-28 05:10:08.942827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:39.886 [2024-11-28 05:10:08.942833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:39.886 [2024-11-28 05:10:08.942838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:39.886 [2024-11-28 05:10:08.942843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:39.886 [2024-11-28 05:10:08.942848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.886 [2024-11-28 05:10:08.942858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:39.886 [2024-11-28 05:10:08.942866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:20:39.886 [2024-11-28 05:10:08.942874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.886 [2024-11-28 05:10:08.944042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.886 [2024-11-28 05:10:08.944055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:39.886 [2024-11-28 05:10:08.944061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:20:39.886 [2024-11-28 05:10:08.944068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.886 [2024-11-28 05:10:08.944133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.886 [2024-11-28 05:10:08.944140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:39.886 [2024-11-28 05:10:08.944146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:39.886 [2024-11-28 05:10:08.944152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.886 [2024-11-28 05:10:08.948467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.886 [2024-11-28 05:10:08.948555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.886 [2024-11-28 05:10:08.948602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.886 [2024-11-28 05:10:08.948619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.886 [2024-11-28 05:10:08.948676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.886 [2024-11-28 05:10:08.948764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:39.886 [2024-11-28 05:10:08.948783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.886 [2024-11-28 05:10:08.948801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.886 [2024-11-28 
05:10:08.948854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.886 [2024-11-28 05:10:08.948944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.886 [2024-11-28 05:10:08.948962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-11-28 05:10:08.948977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-11-28 05:10:08.948999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-11-28 05:10:08.949017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.887 [2024-11-28 05:10:08.949032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-11-28 05:10:08.949077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-11-28 05:10:08.956404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-11-28 05:10:08.956503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.887 [2024-11-28 05:10:08.956555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-11-28 05:10:08.956574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-11-28 05:10:08.962400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-11-28 05:10:08.962506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:39.887 [2024-11-28 05:10:08.962545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-11-28 05:10:08.962563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-11-28 05:10:08.962591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-11-28 05:10:08.962685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.887 [2024-11-28 05:10:08.962703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-11-28 05:10:08.962718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-11-28 05:10:08.962758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-11-28 05:10:08.962825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.887 [2024-11-28 05:10:08.962847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-11-28 05:10:08.962861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-11-28 05:10:08.962924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-11-28 05:10:08.962972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.887 [2024-11-28 05:10:08.962989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-11-28 05:10:08.963003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-11-28 05:10:08.963040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-11-28 05:10:08.963079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:39.887 [2024-11-28 05:10:08.963096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-11-28 05:10:08.963114] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-11-28 05:10:08.963293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-11-28 05:10:08.963340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.887 [2024-11-28 05:10:08.963396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-11-28 05:10:08.963413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-11-28 05:10:08.963456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.887 [2024-11-28 05:10:08.963531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:39.887 [2024-11-28 05:10:08.963552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.887 [2024-11-28 05:10:08.963571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.887 [2024-11-28 05:10:08.963681] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.078 ms, result 0 00:20:40.147 00:20:40.147 00:20:40.147 05:10:09 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:40.147 [2024-11-28 05:10:09.318032] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:20:40.147 [2024-11-28 05:10:09.318239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88733 ] 00:20:40.407 [2024-11-28 05:10:09.452620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.407 [2024-11-28 05:10:09.469926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.407 [2024-11-28 05:10:09.550836] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.407 [2024-11-28 05:10:09.551031] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.671 [2024-11-28 05:10:09.692752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.671 [2024-11-28 05:10:09.692906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:40.671 [2024-11-28 05:10:09.692979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:40.671 [2024-11-28 05:10:09.693010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.671 [2024-11-28 05:10:09.693079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.671 [2024-11-28 05:10:09.693105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:40.671 [2024-11-28 05:10:09.693125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:40.671 [2024-11-28 05:10:09.693152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.671 [2024-11-28 05:10:09.693264] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:40.671 [2024-11-28 05:10:09.693561] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:40.671 [2024-11-28 05:10:09.693608] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.671 [2024-11-28 05:10:09.693673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.671 [2024-11-28 05:10:09.693702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:20:40.671 [2024-11-28 05:10:09.693714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.671 [2024-11-28 05:10:09.694811] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:40.671 [2024-11-28 05:10:09.697521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.671 [2024-11-28 05:10:09.697554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:40.671 [2024-11-28 05:10:09.697564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.711 ms 00:20:40.671 [2024-11-28 05:10:09.697577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.671 [2024-11-28 05:10:09.697627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.671 [2024-11-28 05:10:09.697637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:40.671 [2024-11-28 05:10:09.697647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:40.671 [2024-11-28 05:10:09.697654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.671 [2024-11-28 05:10:09.702718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.671 [2024-11-28 05:10:09.702756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.671 [2024-11-28 05:10:09.702766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.019 ms 00:20:40.671 [2024-11-28 05:10:09.702773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.671 [2024-11-28 05:10:09.702849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.671 [2024-11-28 05:10:09.702858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.671 [2024-11-28 05:10:09.702866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:40.671 [2024-11-28 05:10:09.702873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.671 [2024-11-28 05:10:09.702926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.671 [2024-11-28 05:10:09.702939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:40.671 [2024-11-28 05:10:09.702950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:40.671 [2024-11-28 05:10:09.702957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.671 [2024-11-28 05:10:09.702980] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:40.671 [2024-11-28 05:10:09.704359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.671 [2024-11-28 05:10:09.704384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.671 [2024-11-28 05:10:09.704396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.387 ms 00:20:40.671 [2024-11-28 05:10:09.704403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.671 [2024-11-28 05:10:09.704431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.671 [2024-11-28 05:10:09.704438] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:40.671 [2024-11-28 05:10:09.704448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:40.671 [2024-11-28 05:10:09.704455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.671 [2024-11-28 05:10:09.704473] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:40.671 [2024-11-28 05:10:09.704492] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:40.671 [2024-11-28 05:10:09.704529] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:40.671 [2024-11-28 05:10:09.704546] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:40.671 [2024-11-28 05:10:09.704650] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:40.671 [2024-11-28 05:10:09.704663] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:40.671 [2024-11-28 05:10:09.704674] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:40.671 [2024-11-28 05:10:09.704684] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:40.671 [2024-11-28 05:10:09.704692] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:40.671 [2024-11-28 05:10:09.704700] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:40.671 [2024-11-28 05:10:09.704710] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:40.671 [2024-11-28 05:10:09.704717] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:40.671 [2024-11-28 05:10:09.704723] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:40.671 [2024-11-28 05:10:09.704734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.671 [2024-11-28 05:10:09.704741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:40.671 [2024-11-28 05:10:09.704749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:20:40.671 [2024-11-28 05:10:09.704757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.671 [2024-11-28 05:10:09.704840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.671 [2024-11-28 05:10:09.704848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:40.671 [2024-11-28 05:10:09.704855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:40.671 [2024-11-28 05:10:09.704861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.671 [2024-11-28 05:10:09.704963] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:40.671 [2024-11-28 05:10:09.704976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:40.671 [2024-11-28 05:10:09.704984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.671 [2024-11-28 05:10:09.704991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.671 [2024-11-28 05:10:09.705002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:40.671 
[2024-11-28 05:10:09.705009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:40.671 [2024-11-28 05:10:09.705017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:40.671 [2024-11-28 05:10:09.705025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:40.671 [2024-11-28 05:10:09.705033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:40.671 [2024-11-28 05:10:09.705041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.671 [2024-11-28 05:10:09.705049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:40.671 [2024-11-28 05:10:09.705057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:40.671 [2024-11-28 05:10:09.705066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.671 [2024-11-28 05:10:09.705074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:40.672 [2024-11-28 05:10:09.705081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:40.672 [2024-11-28 05:10:09.705089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.672 [2024-11-28 05:10:09.705096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:40.672 [2024-11-28 05:10:09.705104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:40.672 [2024-11-28 05:10:09.705111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.672 [2024-11-28 05:10:09.705119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:40.672 [2024-11-28 05:10:09.705126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:40.672 [2024-11-28 05:10:09.705135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.672 [2024-11-28 05:10:09.705142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:40.672 [2024-11-28 05:10:09.705150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:40.672 [2024-11-28 05:10:09.705157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.672 [2024-11-28 05:10:09.705165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:40.672 [2024-11-28 05:10:09.705172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:40.672 [2024-11-28 05:10:09.705191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.672 [2024-11-28 05:10:09.705203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:40.672 [2024-11-28 05:10:09.705211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:40.672 [2024-11-28 05:10:09.705218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.672 [2024-11-28 05:10:09.705226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:40.672 [2024-11-28 05:10:09.705233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:40.672 [2024-11-28 05:10:09.705241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.672 [2024-11-28 05:10:09.705248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:40.672 [2024-11-28 05:10:09.705255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:40.672 [2024-11-28 05:10:09.705263] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.25 MiB 00:20:40.672 [2024-11-28 05:10:09.705270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:40.672 [2024-11-28 05:10:09.705278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:40.672 [2024-11-28 05:10:09.705285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.672 [2024-11-28 05:10:09.705292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:40.672 [2024-11-28 05:10:09.705300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:40.672 [2024-11-28 05:10:09.705307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.672 [2024-11-28 05:10:09.705315] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:40.672 [2024-11-28 05:10:09.705325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:40.672 [2024-11-28 05:10:09.705334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.672 [2024-11-28 05:10:09.705342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.672 [2024-11-28 05:10:09.705353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:40.672 [2024-11-28 05:10:09.705361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:40.672 [2024-11-28 05:10:09.705369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:40.672 [2024-11-28 05:10:09.705376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:40.672 [2024-11-28 05:10:09.705383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:40.672 [2024-11-28 05:10:09.705389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:40.672 [2024-11-28 05:10:09.705398] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:40.672 [2024-11-28 05:10:09.705407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.672 [2024-11-28 05:10:09.705415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:40.672 [2024-11-28 05:10:09.705422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:40.672 [2024-11-28 05:10:09.705430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:40.672 [2024-11-28 05:10:09.705437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:40.672 [2024-11-28 05:10:09.705443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:40.672 [2024-11-28 05:10:09.705452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:40.672 [2024-11-28 05:10:09.705459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:40.672 [2024-11-28 05:10:09.705466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 
00:20:40.672 [2024-11-28 05:10:09.705473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:40.672 [2024-11-28 05:10:09.705484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:40.672 [2024-11-28 05:10:09.705491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:40.672 [2024-11-28 05:10:09.705498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:40.672 [2024-11-28 05:10:09.705505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:40.672 [2024-11-28 05:10:09.705512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:40.672 [2024-11-28 05:10:09.705518] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:40.672 [2024-11-28 05:10:09.705526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.672 [2024-11-28 05:10:09.705534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:40.672 [2024-11-28 05:10:09.705541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:40.672 [2024-11-28 05:10:09.705548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:40.672 [2024-11-28 05:10:09.705555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:40.672 [2024-11-28 05:10:09.705562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.672 [2024-11-28 05:10:09.705571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:40.672 [2024-11-28 05:10:09.705580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:20:40.672 [2024-11-28 05:10:09.705587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.672 [2024-11-28 05:10:09.714816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.672 [2024-11-28 05:10:09.714928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.672 [2024-11-28 05:10:09.714980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.190 ms 00:20:40.672 [2024-11-28 05:10:09.715002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.672 [2024-11-28 05:10:09.715101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.672 [2024-11-28 05:10:09.715122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:40.672 [2024-11-28 05:10:09.715145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:40.672 [2024-11-28 05:10:09.715164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.672 [2024-11-28 05:10:09.737427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:40.672 [2024-11-28 05:10:09.737665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:40.672 [2024-11-28 05:10:09.737819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.131 ms 00:20:40.672 [2024-11-28 05:10:09.737870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.672 [2024-11-28 05:10:09.737971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.672 [2024-11-28 05:10:09.738021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.672 [2024-11-28 05:10:09.738062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:40.672 [2024-11-28 05:10:09.738119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.672 [2024-11-28 05:10:09.738682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.672 [2024-11-28 05:10:09.738858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.672 [2024-11-28 05:10:09.738949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:20:40.672 [2024-11-28 05:10:09.738994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.672 [2024-11-28 05:10:09.739295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.672 [2024-11-28 05:10:09.739357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.672 [2024-11-28 05:10:09.739479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:20:40.672 [2024-11-28 05:10:09.739525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.672 [2024-11-28 05:10:09.745910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.672 [2024-11-28 05:10:09.746027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:40.672 [2024-11-28 05:10:09.746087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.306 ms 00:20:40.672 [2024-11-28 05:10:09.746109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.672 [2024-11-28 05:10:09.749067] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:40.672 [2024-11-28 05:10:09.749215] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:40.672 [2024-11-28 05:10:09.749279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.672 [2024-11-28 05:10:09.749299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:40.672 [2024-11-28 05:10:09.749319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.066 ms 00:20:40.672 [2024-11-28 05:10:09.749337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.673 [2024-11-28 05:10:09.764111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.673 [2024-11-28 05:10:09.764260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:40.673 [2024-11-28 05:10:09.764314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.707 ms 00:20:40.673 [2024-11-28 05:10:09.764337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.673 [2024-11-28 05:10:09.766566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.673 [2024-11-28 05:10:09.766679] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:40.673 [2024-11-28 05:10:09.766727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.184 ms 00:20:40.673 [2024-11-28 05:10:09.766748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.673 [2024-11-28 05:10:09.768802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.673 [2024-11-28 05:10:09.768918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:40.673 [2024-11-28 05:10:09.768965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.939 ms 00:20:40.673 [2024-11-28 05:10:09.768986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.673 [2024-11-28 05:10:09.769561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.673 [2024-11-28 05:10:09.769711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:40.673 [2024-11-28 05:10:09.769771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:20:40.673 [2024-11-28 05:10:09.769804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.673 [2024-11-28 05:10:09.787381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.673 [2024-11-28 05:10:09.787544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:40.673 [2024-11-28 05:10:09.787599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.524 ms 00:20:40.673 [2024-11-28 05:10:09.787621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.673 [2024-11-28 05:10:09.795316] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:40.673 [2024-11-28 05:10:09.797812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.673 [2024-11-28 05:10:09.797919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:40.673 [2024-11-28 05:10:09.797967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.095 ms 00:20:40.673 [2024-11-28 05:10:09.797989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.673 [2024-11-28 05:10:09.798095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.673 [2024-11-28 05:10:09.798126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:40.673 [2024-11-28 05:10:09.798153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:40.673 [2024-11-28 05:10:09.798232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.673 [2024-11-28 05:10:09.798326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.673 [2024-11-28 05:10:09.798425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:40.673 [2024-11-28 05:10:09.798448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:40.673 [2024-11-28 05:10:09.798513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.673 [2024-11-28 05:10:09.798559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.673 [2024-11-28 05:10:09.798580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:40.673 [2024-11-28 05:10:09.798600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:40.673 [2024-11-28 05:10:09.798656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:40.673 [2024-11-28 05:10:09.798704] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:40.673 [2024-11-28 05:10:09.798730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.673 [2024-11-28 05:10:09.798752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:40.673 [2024-11-28 05:10:09.798793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:40.673 [2024-11-28 05:10:09.799310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.673 [2024-11-28 05:10:09.803669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.673 [2024-11-28 05:10:09.803711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:40.673 [2024-11-28 05:10:09.803732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.304 ms 00:20:40.673 [2024-11-28 05:10:09.803740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.673 [2024-11-28 05:10:09.803813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.673 [2024-11-28 05:10:09.803823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:40.673 [2024-11-28 05:10:09.803836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:40.673 [2024-11-28 05:10:09.803844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.673 [2024-11-28 05:10:09.804800] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.644 ms, result 0 00:20:42.052  [2024-11-28T05:10:12.279Z] Copying: 17/1024 [MB] (17 MBps) [... intermediate spdk_dd progress records elided: per-interval rates ranged 10-21 MBps ...] [2024-11-28T05:11:22.213Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-28 05:11:22.152030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.929 [2024-11-28 05:11:22.152160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:52.929 [2024-11-28 05:11:22.152262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:52.929 [2024-11-28 05:11:22.152288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.929 [2024-11-28 05:11:22.152353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:52.929 [2024-11-28 05:11:22.153441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.929 [2024-11-28 05:11:22.153500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:52.929 [2024-11-28 05:11:22.153527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:21:52.929 [2024-11-28 05:11:22.153549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.929 [2024-11-28
05:11:22.154256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.929 [2024-11-28 05:11:22.154289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:52.929 [2024-11-28 05:11:22.154321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:21:52.929 [2024-11-28 05:11:22.154344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.929 [2024-11-28 05:11:22.160490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.929 [2024-11-28 05:11:22.160509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:52.929 [2024-11-28 05:11:22.160519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.080 ms 00:21:52.929 [2024-11-28 05:11:22.160533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.929 [2024-11-28 05:11:22.166725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.929 [2024-11-28 05:11:22.166763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:52.929 [2024-11-28 05:11:22.166774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.175 ms 00:21:52.929 [2024-11-28 05:11:22.166789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.929 [2024-11-28 05:11:22.169868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.929 [2024-11-28 05:11:22.169908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:52.929 [2024-11-28 05:11:22.169918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.014 ms 00:21:52.929 [2024-11-28 05:11:22.169926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.929 [2024-11-28 05:11:22.174848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.929 [2024-11-28 05:11:22.174888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:52.929 [2024-11-28 05:11:22.174908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.874 ms 00:21:52.929 [2024-11-28 05:11:22.174916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.929 [2024-11-28 05:11:22.175046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.929 [2024-11-28 05:11:22.175055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:52.929 [2024-11-28 05:11:22.175070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:21:52.929 [2024-11-28 05:11:22.175083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.929 [2024-11-28 05:11:22.178267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.929 [2024-11-28 05:11:22.178413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:52.929 [2024-11-28 05:11:22.178474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.166 ms 00:21:52.929 [2024-11-28 05:11:22.178497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.929 [2024-11-28 05:11:22.181863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.929 [2024-11-28 05:11:22.182015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:52.929 [2024-11-28 05:11:22.182072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.287 ms 00:21:52.929 [2024-11-28 05:11:22.182095] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:52.929 [2024-11-28 05:11:22.184849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.929 [2024-11-28 05:11:22.185000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:52.929 [2024-11-28 05:11:22.185057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.603 ms 00:21:52.929 [2024-11-28 05:11:22.185079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.929 [2024-11-28 05:11:22.187666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.929 [2024-11-28 05:11:22.187818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:52.929 [2024-11-28 05:11:22.187873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.425 ms 00:21:52.929 [2024-11-28 05:11:22.187895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.929 [2024-11-28 05:11:22.187938] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:52.929 [2024-11-28 05:11:22.187967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [... Band 2 through Band 100 elided: all report 0 / 261120 wr_cnt: 0 state: free ...] 00:21:52.930 [2024-11-28 05:11:22.190565] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:52.931 [2024-11-28 05:11:22.190574] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ceea9fc-b1d3-4cf8-b052-f90915354471 00:21:52.931 [2024-11-28 05:11:22.190582] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:52.931 [2024-11-28 05:11:22.190590] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:52.931 [2024-11-28 05:11:22.190598] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:52.931 [2024-11-28 05:11:22.190606] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:52.931 [2024-11-28 05:11:22.190614] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:52.931 [2024-11-28 05:11:22.190621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:52.931 [2024-11-28 05:11:22.190634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:52.931 [2024-11-28 05:11:22.190641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:52.931 [2024-11-28 05:11:22.190648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:52.931 [2024-11-28 05:11:22.190656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.931 [2024-11-28 05:11:22.190667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:52.931 [2024-11-28 05:11:22.190676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms 00:21:52.931 [2024-11-28 05:11:22.190684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.931 [2024-11-28 05:11:22.192931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.931 [2024-11-28 05:11:22.193082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:52.931 [2024-11-28 05:11:22.193099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.223 ms 00:21:52.931 [2024-11-28 05:11:22.193114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.931 [2024-11-28 05:11:22.193265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.931 [2024-11-28 05:11:22.193275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:52.931
[2024-11-28 05:11:22.193285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:21:52.931 [2024-11-28 05:11:22.193292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.931 [2024-11-28 05:11:22.200581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.931 [2024-11-28 05:11:22.200617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:52.931 [2024-11-28 05:11:22.200627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.931 [2024-11-28 05:11:22.200641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.931 [2024-11-28 05:11:22.200700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.931 [2024-11-28 05:11:22.200709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:52.931 [2024-11-28 05:11:22.200717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.931 [2024-11-28 05:11:22.200728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.931 [2024-11-28 05:11:22.200791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.931 [2024-11-28 05:11:22.200805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:52.931 [2024-11-28 05:11:22.200813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.931 [2024-11-28 05:11:22.200822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.931 [2024-11-28 05:11:22.200840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.931 [2024-11-28 05:11:22.200849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:52.931 [2024-11-28 05:11:22.200856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.931 [2024-11-28 05:11:22.200864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.193 [2024-11-28 05:11:22.214957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.193 [2024-11-28 05:11:22.214994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:53.193 [2024-11-28 05:11:22.215005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.193 [2024-11-28 05:11:22.215018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.193 [2024-11-28 05:11:22.225867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.193 [2024-11-28 05:11:22.225909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:53.193 [2024-11-28 05:11:22.225921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.193 [2024-11-28 05:11:22.225938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.193 [2024-11-28 05:11:22.225992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.193 [2024-11-28 05:11:22.226002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:53.193 [2024-11-28 05:11:22.226011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.193 [2024-11-28 05:11:22.226019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.193 [2024-11-28 05:11:22.226055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.193 [2024-11-28 05:11:22.226067] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:53.193 [2024-11-28 05:11:22.226076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.193 [2024-11-28 05:11:22.226088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.193 [2024-11-28 05:11:22.226167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.193 [2024-11-28 05:11:22.226323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:53.193 [2024-11-28 05:11:22.226332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.193 [2024-11-28 05:11:22.226341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.193 [2024-11-28 05:11:22.226375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.193 [2024-11-28 05:11:22.226388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:53.193 [2024-11-28 05:11:22.226396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.193 [2024-11-28 05:11:22.226407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.193 [2024-11-28 05:11:22.226450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.193 [2024-11-28 05:11:22.226460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:53.193 [2024-11-28 05:11:22.226468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.193 [2024-11-28 05:11:22.226477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.193 [2024-11-28 05:11:22.226526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.193 [2024-11-28 05:11:22.226540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:53.193 [2024-11-28 05:11:22.226550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.193 [2024-11-28 05:11:22.226565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.193 [2024-11-28 05:11:22.226699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 74.671 ms, result 0 00:21:53.193 00:21:53.193 00:21:53.193 05:11:22 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:55.741 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:55.741 05:11:24 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:55.741 [2024-11-28 05:11:24.752536] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
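The md5 manifest check above is how restore.sh decides the FTL round trip preserved the data: a checksum recorded for the test file must match what is read back through ftl0, and `md5sum -c` prints "testfile: OK" on success. For readers reproducing the comparison outside the harness, a minimal Python equivalent of `md5sum -c` (file paths here are hypothetical placeholders, not the harness's actual layout):

    import hashlib

    def md5_of(path: str, chunk: int = 1 << 20) -> str:
        # Stream in 1 MiB chunks so large test files need not fit in RAM.
        h = hashlib.md5()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def check_manifest(manifest: str) -> bool:
        # Each manifest line is "<hex digest>  <filename>", as md5sum writes it.
        ok = True
        with open(manifest) as f:
            for line in f:
                digest, _, name = line.rstrip("\n").partition("  ")
                good = md5_of(name) == digest
                print(f"{name}: {'OK' if good else 'FAILED'}")
                ok &= good
        return ok

    # check_manifest("testfile.md5")  # hypothetical invocation

The spdk_dd invocation above then replays the same test file into ftl0 at an output offset given by --seek=131072 (analogous to dd's seek=), so the startup log that follows exercises bringing the FTL device back up and writing onto a device that already holds data.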
00:21:55.742 [2024-11-28 05:11:24.752667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89507 ] 00:21:55.742 [2024-11-28 05:11:24.899282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.742 [2024-11-28 05:11:24.927992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.004 [2024-11-28 05:11:25.044368] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:56.004 [2024-11-28 05:11:25.044461] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:56.004 [2024-11-28 05:11:25.205137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.004 [2024-11-28 05:11:25.205220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:56.004 [2024-11-28 05:11:25.205236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:56.004 [2024-11-28 05:11:25.205244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.004 [2024-11-28 05:11:25.205301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.004 [2024-11-28 05:11:25.205316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:56.004 [2024-11-28 05:11:25.205325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:56.004 [2024-11-28 05:11:25.205339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.004 [2024-11-28 05:11:25.205391] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:56.004 [2024-11-28 05:11:25.205665] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:56.004 [2024-11-28 05:11:25.205710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.004 [2024-11-28 05:11:25.205720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:56.004 [2024-11-28 05:11:25.205731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:21:56.004 [2024-11-28 05:11:25.205739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.004 [2024-11-28 05:11:25.207440] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:56.004 [2024-11-28 05:11:25.211037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.004 [2024-11-28 05:11:25.211089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:56.004 [2024-11-28 05:11:25.211109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.598 ms 00:21:56.004 [2024-11-28 05:11:25.211122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.004 [2024-11-28 05:11:25.211207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.004 [2024-11-28 05:11:25.211224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:56.004 [2024-11-28 05:11:25.211235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:56.004 [2024-11-28 05:11:25.211242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.004 [2024-11-28 05:11:25.219120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:56.004 [2024-11-28 05:11:25.219162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:56.004 [2024-11-28 05:11:25.219201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.830 ms 00:21:56.004 [2024-11-28 05:11:25.219208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.004 [2024-11-28 05:11:25.219297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.004 [2024-11-28 05:11:25.219307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:56.004 [2024-11-28 05:11:25.219316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:56.004 [2024-11-28 05:11:25.219326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.004 [2024-11-28 05:11:25.219380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.004 [2024-11-28 05:11:25.219391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:56.004 [2024-11-28 05:11:25.219400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:56.004 [2024-11-28 05:11:25.219411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.004 [2024-11-28 05:11:25.219439] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:56.005 [2024-11-28 05:11:25.221513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.005 [2024-11-28 05:11:25.221549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:56.005 [2024-11-28 05:11:25.221559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.079 ms 00:21:56.005 [2024-11-28 05:11:25.221566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.005 [2024-11-28 05:11:25.221600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.005 [2024-11-28 05:11:25.221608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:56.005 [2024-11-28 05:11:25.221616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:56.005 [2024-11-28 05:11:25.221627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.005 [2024-11-28 05:11:25.221650] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:56.005 [2024-11-28 05:11:25.221672] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:56.005 [2024-11-28 05:11:25.221726] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:56.005 [2024-11-28 05:11:25.221746] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:56.005 [2024-11-28 05:11:25.221852] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:56.005 [2024-11-28 05:11:25.221867] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:56.005 [2024-11-28 05:11:25.221880] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:56.005 [2024-11-28 05:11:25.221890] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:56.005 [2024-11-28 05:11:25.221899] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:56.005 [2024-11-28 05:11:25.221911] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:56.005 [2024-11-28 05:11:25.221919] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:56.005 [2024-11-28 05:11:25.221927] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:56.005 [2024-11-28 05:11:25.221934] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:56.005 [2024-11-28 05:11:25.221947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.005 [2024-11-28 05:11:25.221954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:56.005 [2024-11-28 05:11:25.221961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:21:56.005 [2024-11-28 05:11:25.221970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.005 [2024-11-28 05:11:25.222055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.005 [2024-11-28 05:11:25.222063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:56.005 [2024-11-28 05:11:25.222075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:56.005 [2024-11-28 05:11:25.222082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.005 [2024-11-28 05:11:25.222203] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:56.005 [2024-11-28 05:11:25.222216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:56.005 [2024-11-28 05:11:25.222225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:56.005 [2024-11-28 05:11:25.222234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.005 [2024-11-28 05:11:25.222249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:56.005 [2024-11-28 05:11:25.222257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:56.005 [2024-11-28 05:11:25.222265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:56.005 [2024-11-28 05:11:25.222275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:56.005 [2024-11-28 05:11:25.222283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:56.005 [2024-11-28 05:11:25.222291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:56.005 [2024-11-28 05:11:25.222302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:56.005 [2024-11-28 05:11:25.222311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:56.005 [2024-11-28 05:11:25.222318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:56.005 [2024-11-28 05:11:25.222325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:56.005 [2024-11-28 05:11:25.222333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:56.005 [2024-11-28 05:11:25.222341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.005 [2024-11-28 05:11:25.222349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:56.005 [2024-11-28 05:11:25.222357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:56.005 [2024-11-28 05:11:25.222364] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.005 [2024-11-28 05:11:25.222372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:56.005 [2024-11-28 05:11:25.222380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:56.005 [2024-11-28 05:11:25.222392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.005 [2024-11-28 05:11:25.222400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:56.005 [2024-11-28 05:11:25.222408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:56.005 [2024-11-28 05:11:25.222417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.005 [2024-11-28 05:11:25.222425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:56.005 [2024-11-28 05:11:25.222437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:56.005 [2024-11-28 05:11:25.222445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.005 [2024-11-28 05:11:25.222453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:56.005 [2024-11-28 05:11:25.222461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:56.005 [2024-11-28 05:11:25.222479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.005 [2024-11-28 05:11:25.222487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:56.005 [2024-11-28 05:11:25.222496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:56.005 [2024-11-28 05:11:25.222503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:56.005 [2024-11-28 05:11:25.222511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:56.005 [2024-11-28 05:11:25.222520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:56.005 [2024-11-28 05:11:25.222527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:56.005 [2024-11-28 05:11:25.222535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:56.005 [2024-11-28 05:11:25.222541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:56.005 [2024-11-28 05:11:25.222547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.005 [2024-11-28 05:11:25.222554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:56.005 [2024-11-28 05:11:25.222560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:56.005 [2024-11-28 05:11:25.222568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.005 [2024-11-28 05:11:25.222578] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:56.005 [2024-11-28 05:11:25.222587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:56.005 [2024-11-28 05:11:25.222595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:56.005 [2024-11-28 05:11:25.222602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.005 [2024-11-28 05:11:25.222610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:56.005 [2024-11-28 05:11:25.222617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:56.005 [2024-11-28 05:11:25.222623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:56.005 
[2024-11-28 05:11:25.222630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:56.005 [2024-11-28 05:11:25.222636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:56.006 [2024-11-28 05:11:25.222642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:56.006 [2024-11-28 05:11:25.222651] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:56.006 [2024-11-28 05:11:25.222661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:56.006 [2024-11-28 05:11:25.222669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:56.006 [2024-11-28 05:11:25.222677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:56.006 [2024-11-28 05:11:25.222685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:56.006 [2024-11-28 05:11:25.222694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:56.006 [2024-11-28 05:11:25.222701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:56.006 [2024-11-28 05:11:25.222708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:56.006 [2024-11-28 05:11:25.222715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:56.006 [2024-11-28 05:11:25.222722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:56.006 [2024-11-28 05:11:25.222729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:56.006 [2024-11-28 05:11:25.222742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:56.006 [2024-11-28 05:11:25.222749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:56.006 [2024-11-28 05:11:25.222757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:56.006 [2024-11-28 05:11:25.222764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:56.006 [2024-11-28 05:11:25.222771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:56.006 [2024-11-28 05:11:25.222778] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:56.006 [2024-11-28 05:11:25.222787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:56.006 [2024-11-28 05:11:25.222796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:56.006 [2024-11-28 05:11:25.222803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:56.006 [2024-11-28 05:11:25.222810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:56.006 [2024-11-28 05:11:25.222819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:56.006 [2024-11-28 05:11:25.222826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.006 [2024-11-28 05:11:25.222834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:56.006 [2024-11-28 05:11:25.222842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:21:56.006 [2024-11-28 05:11:25.222851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.006 [2024-11-28 05:11:25.236592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.006 [2024-11-28 05:11:25.236829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:56.006 [2024-11-28 05:11:25.236849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.697 ms 00:21:56.006 [2024-11-28 05:11:25.236858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.006 [2024-11-28 05:11:25.236952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.006 [2024-11-28 05:11:25.236961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:56.006 [2024-11-28 05:11:25.236970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:56.006 [2024-11-28 05:11:25.236978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.006 [2024-11-28 05:11:25.258472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.006 [2024-11-28 05:11:25.258711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:56.006 [2024-11-28 05:11:25.258738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.434 ms 00:21:56.006 [2024-11-28 05:11:25.258751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.006 [2024-11-28 05:11:25.258814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.006 [2024-11-28 05:11:25.258829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:56.006 [2024-11-28 05:11:25.258854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:56.006 [2024-11-28 05:11:25.258868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.006 [2024-11-28 05:11:25.259531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.006 [2024-11-28 05:11:25.259584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:56.006 [2024-11-28 05:11:25.259606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:21:56.006 [2024-11-28 05:11:25.259618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.006 [2024-11-28 05:11:25.259823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.006 [2024-11-28 05:11:25.259836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:56.006 [2024-11-28 05:11:25.259848] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:21:56.006 [2024-11-28 05:11:25.259859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.006 [2024-11-28 05:11:25.268198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.006 [2024-11-28 05:11:25.268245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:56.006 [2024-11-28 05:11:25.268257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.292 ms 00:21:56.006 [2024-11-28 05:11:25.268265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.006 [2024-11-28 05:11:25.272222] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:56.006 [2024-11-28 05:11:25.272273] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:56.006 [2024-11-28 05:11:25.272289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.006 [2024-11-28 05:11:25.272298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:56.006 [2024-11-28 05:11:25.272308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.924 ms 00:21:56.006 [2024-11-28 05:11:25.272317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.267 [2024-11-28 05:11:25.288030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.267 [2024-11-28 05:11:25.288097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:56.267 [2024-11-28 05:11:25.288111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.651 ms 00:21:56.267 [2024-11-28 05:11:25.288119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.267 [2024-11-28 05:11:25.291023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.267 [2024-11-28 05:11:25.291070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:56.267 [2024-11-28 05:11:25.291080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.867 ms 00:21:56.267 [2024-11-28 05:11:25.291088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.267 [2024-11-28 05:11:25.293676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.267 [2024-11-28 05:11:25.293859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:56.267 [2024-11-28 05:11:25.293877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.543 ms 00:21:56.267 [2024-11-28 05:11:25.293885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.267 [2024-11-28 05:11:25.294249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.267 [2024-11-28 05:11:25.294265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:56.267 [2024-11-28 05:11:25.294274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:21:56.267 [2024-11-28 05:11:25.294288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.267 [2024-11-28 05:11:25.318420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.267 [2024-11-28 05:11:25.318482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:56.267 [2024-11-28 05:11:25.318496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.106 ms 00:21:56.267 [2024-11-28 05:11:25.318505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.267 [2024-11-28 05:11:25.326627] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:56.267 [2024-11-28 05:11:25.329573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.267 [2024-11-28 05:11:25.329613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:56.267 [2024-11-28 05:11:25.329625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.019 ms 00:21:56.268 [2024-11-28 05:11:25.329640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.268 [2024-11-28 05:11:25.329726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.268 [2024-11-28 05:11:25.329738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:56.268 [2024-11-28 05:11:25.329759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:56.268 [2024-11-28 05:11:25.329771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.268 [2024-11-28 05:11:25.329842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.268 [2024-11-28 05:11:25.329856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:56.268 [2024-11-28 05:11:25.329865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:56.268 [2024-11-28 05:11:25.329873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.268 [2024-11-28 05:11:25.329897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.268 [2024-11-28 05:11:25.329910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:56.268 [2024-11-28 05:11:25.329919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:56.268 [2024-11-28 05:11:25.329927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.268 [2024-11-28 05:11:25.329969] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:56.268 [2024-11-28 05:11:25.329980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.268 [2024-11-28 05:11:25.329988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:56.268 [2024-11-28 05:11:25.330000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:56.268 [2024-11-28 05:11:25.330008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.268 [2024-11-28 05:11:25.335296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.268 [2024-11-28 05:11:25.335353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:56.268 [2024-11-28 05:11:25.335364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.269 ms 00:21:56.268 [2024-11-28 05:11:25.335372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.268 [2024-11-28 05:11:25.335457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.268 [2024-11-28 05:11:25.335467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:56.268 [2024-11-28 05:11:25.335483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:56.268 [2024-11-28 05:11:25.335495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.268 
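Every management step in the startup sequence above is emitted as a quartet of trace_step records: Action (or Rollback), name, duration, status. Summing the durations is a quick way to see where the 'FTL startup' total reported just below goes. A rough parser sketch, under the assumption that the log keeps the exact field wording shown here (the regexes approximate these lines and are not an official SPDK format):

    import re
    from collections import defaultdict

    # "428:trace_step: ... name: <step>" is followed by "430:trace_step: ... duration: <ms> ms"
    NAME_RE = re.compile(r"428:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.*?) \d{2}:\d{2}:\d{2}")
    DUR_RE = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

    def step_durations(log_text: str) -> dict[str, float]:
        # Names and durations appear in lockstep, so pairing by order suffices.
        totals: dict[str, float] = defaultdict(float)
        for name, dur in zip(NAME_RE.findall(log_text), DUR_RE.findall(log_text)):
            totals[name] += float(dur)
        return dict(totals)

In this run the dominant startup steps are 'Restore P2L checkpoints' (24.106 ms), 'Initialize NV cache' (21.434 ms), 'Restore valid map metadata' (15.651 ms) and 'Initialize metadata' (13.697 ms); most other quartets report well under a millisecond.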
[2024-11-28 05:11:25.336651] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 131.018 ms, result 0 00:21:57.210 [2024-11-28T05:11:27.438Z] Copying: 10/1024 [MB] (10 MBps) [intermediate per-interval progress snapshots condensed; step rates ranged from 4136 kBps to 38 MBps] [2024-11-28T05:12:29.747Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-28 05:12:29.614221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.463 [2024-11-28 05:12:29.614594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:00.463 [2024-11-28 05:12:29.614622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:00.463 [2024-11-28 05:12:29.614632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.463 [2024-11-28 05:12:29.615811] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:00.463 [2024-11-28 05:12:29.617510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.463 [2024-11-28 05:12:29.617564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:00.463 [2024-11-28 05:12:29.617585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.668 ms 00:23:00.463 [2024-11-28 05:12:29.617595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.463 [2024-11-28 05:12:29.630933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.463 [2024-11-28 05:12:29.630981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:00.463 [2024-11-28 05:12:29.630995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.737 ms 00:23:00.463 [2024-11-28 05:12:29.631004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.463 [2024-11-28 05:12:29.654036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.463 [2024-11-28 05:12:29.654089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:00.463 [2024-11-28 05:12:29.654113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.012 ms 00:23:00.463 [2024-11-28 05:12:29.654126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.463 [2024-11-28 05:12:29.660266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.463 [2024-11-28 05:12:29.660436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:00.463 [2024-11-28 05:12:29.660455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.100 ms 00:23:00.463 [2024-11-28 05:12:29.660464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.463 [2024-11-28 05:12:29.663194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.463 [2024-11-28 05:12:29.663238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:00.463 [2024-11-28 05:12:29.663248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.666 ms 00:23:00.463 [2024-11-28 05:12:29.663256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.463 [2024-11-28 05:12:29.667950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.463 [2024-11-28 05:12:29.668113]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:00.463 [2024-11-28 05:12:29.668132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.652 ms 00:23:00.463 [2024-11-28 05:12:29.668148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.727 [2024-11-28 05:12:29.965287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.727 [2024-11-28 05:12:29.965354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:00.727 [2024-11-28 05:12:29.965368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 296.831 ms 00:23:00.727 [2024-11-28 05:12:29.965378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.727 [2024-11-28 05:12:29.968693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.727 [2024-11-28 05:12:29.968740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:00.727 [2024-11-28 05:12:29.968751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.298 ms 00:23:00.727 [2024-11-28 05:12:29.968758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.727 [2024-11-28 05:12:29.970761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.727 [2024-11-28 05:12:29.970933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:00.727 [2024-11-28 05:12:29.970961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.961 ms 00:23:00.727 [2024-11-28 05:12:29.970969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.727 [2024-11-28 05:12:29.972585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.727 [2024-11-28 05:12:29.972631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:00.727 [2024-11-28 05:12:29.972642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:23:00.727 [2024-11-28 05:12:29.972649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.727 [2024-11-28 05:12:29.974207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.727 [2024-11-28 05:12:29.974249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:00.727 [2024-11-28 05:12:29.974258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.463 ms 00:23:00.727 [2024-11-28 05:12:29.974266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.727 [2024-11-28 05:12:29.974305] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:00.727 [2024-11-28 05:12:29.974320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 105216 / 261120 wr_cnt: 1 state: open 00:23:00.727 [2024-11-28 05:12:29.974332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:00.727 [2024-11-28 05:12:29.974341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:00.727 [2024-11-28 05:12:29.974349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:00.727 [2024-11-28 05:12:29.974357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:00.727 [2024-11-28 05:12:29.974365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 
00:23:00.727 [2024-11-28 05:12:29.974373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free [Bands 8-100 report the same: 0 / 261120 wr_cnt: 0 state: free — 93 identical per-band entries condensed] 00:23:00.728 [2024-11-28 05:12:29.975135] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:00.728 [2024-11-28 05:12:29.975143] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ceea9fc-b1d3-4cf8-b052-f90915354471 00:23:00.728 [2024-11-28 05:12:29.975152] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 105216 00:23:00.728 [2024-11-28 05:12:29.975164] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 106176 00:23:00.728 [2024-11-28 05:12:29.975187] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 105216 00:23:00.728 [2024-11-28 05:12:29.975209] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:23:00.728 [2024-11-28
[2024-11-28 05:12:29.975217] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:00.728 [2024-11-28 05:12:29.975228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:00.728 [2024-11-28 05:12:29.975236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:00.728 [2024-11-28 05:12:29.975243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:00.728 [2024-11-28 05:12:29.975250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:00.728 [2024-11-28 05:12:29.975257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.728 [2024-11-28 05:12:29.975270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:00.728 [2024-11-28 05:12:29.975279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:23:00.728 [2024-11-28 05:12:29.975287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.728 [2024-11-28 05:12:29.977525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.729 [2024-11-28 05:12:29.977694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:00.729 [2024-11-28 05:12:29.977712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.219 ms 00:23:00.729 [2024-11-28 05:12:29.977722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.729 [2024-11-28 05:12:29.977849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.729 [2024-11-28 05:12:29.977861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:00.729 [2024-11-28 05:12:29.977871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:23:00.729 [2024-11-28 05:12:29.977881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.729 [2024-11-28 05:12:29.985165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.729 [2024-11-28 05:12:29.985245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:00.729 [2024-11-28 05:12:29.985263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.729 [2024-11-28 05:12:29.985271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.729 [2024-11-28 05:12:29.985334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.729 [2024-11-28 05:12:29.985343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:00.729 [2024-11-28 05:12:29.985352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.729 [2024-11-28 05:12:29.985363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.729 [2024-11-28 05:12:29.985425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.729 [2024-11-28 05:12:29.985435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:00.729 [2024-11-28 05:12:29.985443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.729 [2024-11-28 05:12:29.985451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.729 [2024-11-28 05:12:29.985466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.729 [2024-11-28 05:12:29.985474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:00.729 [2024-11-28 05:12:29.985482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.000 ms 00:23:00.729 [2024-11-28 05:12:29.985491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.729 [2024-11-28 05:12:29.999347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.729 [2024-11-28 05:12:29.999398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:00.729 [2024-11-28 05:12:29.999410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.729 [2024-11-28 05:12:29.999419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.990 [2024-11-28 05:12:30.010194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.990 [2024-11-28 05:12:30.010240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:00.990 [2024-11-28 05:12:30.010252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.990 [2024-11-28 05:12:30.010260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.990 [2024-11-28 05:12:30.010316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.990 [2024-11-28 05:12:30.010335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:00.990 [2024-11-28 05:12:30.010343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.990 [2024-11-28 05:12:30.010353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.990 [2024-11-28 05:12:30.010392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.990 [2024-11-28 05:12:30.010405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:00.990 [2024-11-28 05:12:30.010414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.990 [2024-11-28 05:12:30.010422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.990 [2024-11-28 05:12:30.010500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.990 [2024-11-28 05:12:30.010515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:00.990 [2024-11-28 05:12:30.010524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.990 [2024-11-28 05:12:30.010531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.990 [2024-11-28 05:12:30.010565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.990 [2024-11-28 05:12:30.010575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:00.990 [2024-11-28 05:12:30.010587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.990 [2024-11-28 05:12:30.010595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.990 [2024-11-28 05:12:30.010634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.990 [2024-11-28 05:12:30.010646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:00.990 [2024-11-28 05:12:30.010658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.990 [2024-11-28 05:12:30.010666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.990 [2024-11-28 05:12:30.010710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:00.990 [2024-11-28 05:12:30.010721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:00.990 
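(The restore step just below reads the test data back through spdk_dd with --skip=131072 and --count=262144. A minimal sketch of what those arguments work out to, assuming dd-style semantics over 4 KiB logical blocks; the block size is an assumption, but it is consistent with the "1024/1024 [MB]" progress output further down:

    # Interpret the spdk_dd restore parameters from the invocation below (sketch).
    # Assumption: --skip and --count are dd-style block counts over 4 KiB logical blocks.
    block_size = 4096
    skip_blocks, count_blocks = 131072, 262144
    print(f"skip:  {skip_blocks * block_size // 2**20} MiB")   # 512 MiB into the input
    print(f"count: {count_blocks * block_size // 2**20} MiB")  # 1024 MiB copied, matching the log

)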
[2024-11-28 05:12:30.010729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:00.990 [2024-11-28 05:12:30.010744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.990 [2024-11-28 05:12:30.010880] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 400.265 ms, result 0 00:23:01.934 00:23:01.934 00:23:01.934 05:12:30 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:01.934 [2024-11-28 05:12:31.039370] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:23:01.934 [2024-11-28 05:12:31.040328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90193 ] 00:23:01.934 [2024-11-28 05:12:31.187858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.934 [2024-11-28 05:12:31.211006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.196 [2024-11-28 05:12:31.327524] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:02.196 [2024-11-28 05:12:31.327609] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:02.459 [2024-11-28 05:12:31.488532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.459 [2024-11-28 05:12:31.488593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:02.459 [2024-11-28 05:12:31.488608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:02.459 [2024-11-28 05:12:31.488620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.459 [2024-11-28 05:12:31.488680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.459 [2024-11-28 05:12:31.488691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.459 [2024-11-28 05:12:31.488701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:02.459 [2024-11-28 05:12:31.488714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.459 [2024-11-28 05:12:31.488741] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:02.459 [2024-11-28 05:12:31.489019] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:02.459 [2024-11-28 05:12:31.489036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.459 [2024-11-28 05:12:31.489045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.459 [2024-11-28 05:12:31.489056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:23:02.459 [2024-11-28 05:12:31.489064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.459 [2024-11-28 05:12:31.490881] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:02.459 [2024-11-28 05:12:31.494769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.459 [2024-11-28 05:12:31.494818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Load super block 00:23:02.459 [2024-11-28 05:12:31.494830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.899 ms 00:23:02.459 [2024-11-28 05:12:31.494846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.459 [2024-11-28 05:12:31.494917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.459 [2024-11-28 05:12:31.494927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:02.459 [2024-11-28 05:12:31.494937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:02.459 [2024-11-28 05:12:31.494944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.459 [2024-11-28 05:12:31.502843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.459 [2024-11-28 05:12:31.502885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.459 [2024-11-28 05:12:31.502898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.857 ms 00:23:02.459 [2024-11-28 05:12:31.502906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.459 [2024-11-28 05:12:31.503002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.459 [2024-11-28 05:12:31.503011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.459 [2024-11-28 05:12:31.503019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:02.459 [2024-11-28 05:12:31.503027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.459 [2024-11-28 05:12:31.503085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.459 [2024-11-28 05:12:31.503100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:02.459 [2024-11-28 05:12:31.503109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:02.459 [2024-11-28 05:12:31.503124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.459 [2024-11-28 05:12:31.503150] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:02.459 [2024-11-28 05:12:31.505139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.459 [2024-11-28 05:12:31.505167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.459 [2024-11-28 05:12:31.505217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.998 ms 00:23:02.459 [2024-11-28 05:12:31.505225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.459 [2024-11-28 05:12:31.505258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.459 [2024-11-28 05:12:31.505271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:02.459 [2024-11-28 05:12:31.505280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:02.459 [2024-11-28 05:12:31.505290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.459 [2024-11-28 05:12:31.505312] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:02.460 [2024-11-28 05:12:31.505333] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:02.460 [2024-11-28 05:12:31.505385] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:02.460 [2024-11-28 
05:12:31.505405] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:02.460 [2024-11-28 05:12:31.505512] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:02.460 [2024-11-28 05:12:31.505523] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:02.460 [2024-11-28 05:12:31.505537] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:02.460 [2024-11-28 05:12:31.505550] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:02.460 [2024-11-28 05:12:31.505559] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:02.460 [2024-11-28 05:12:31.505568] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:02.460 [2024-11-28 05:12:31.505575] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:02.460 [2024-11-28 05:12:31.505584] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:02.460 [2024-11-28 05:12:31.505591] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:02.460 [2024-11-28 05:12:31.505599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.460 [2024-11-28 05:12:31.505609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:02.460 [2024-11-28 05:12:31.505617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:23:02.460 [2024-11-28 05:12:31.505623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.460 [2024-11-28 05:12:31.505735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.460 [2024-11-28 05:12:31.505744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:02.460 [2024-11-28 05:12:31.505752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:23:02.460 [2024-11-28 05:12:31.505758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.460 [2024-11-28 05:12:31.505862] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:02.460 [2024-11-28 05:12:31.505879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:02.460 [2024-11-28 05:12:31.505888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.460 [2024-11-28 05:12:31.505897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.460 [2024-11-28 05:12:31.505907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:02.460 [2024-11-28 05:12:31.505915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:02.460 [2024-11-28 05:12:31.505924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:02.460 [2024-11-28 05:12:31.505935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:02.460 [2024-11-28 05:12:31.505945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:02.460 [2024-11-28 05:12:31.505952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.460 [2024-11-28 05:12:31.505960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:02.460 [2024-11-28 05:12:31.505968] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:02.460 [2024-11-28 05:12:31.505976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.460 [2024-11-28 05:12:31.505984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:02.460 [2024-11-28 05:12:31.505992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:02.460 [2024-11-28 05:12:31.506000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.460 [2024-11-28 05:12:31.506009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:02.460 [2024-11-28 05:12:31.506024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:02.460 [2024-11-28 05:12:31.506032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.460 [2024-11-28 05:12:31.506040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:02.460 [2024-11-28 05:12:31.506048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:02.460 [2024-11-28 05:12:31.506056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.460 [2024-11-28 05:12:31.506064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:02.460 [2024-11-28 05:12:31.506071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:02.460 [2024-11-28 05:12:31.506079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.460 [2024-11-28 05:12:31.506086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:02.460 [2024-11-28 05:12:31.506094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:02.460 [2024-11-28 05:12:31.506102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.460 [2024-11-28 05:12:31.506111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:02.460 [2024-11-28 05:12:31.506120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:02.460 [2024-11-28 05:12:31.506127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.460 [2024-11-28 05:12:31.506135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:02.460 [2024-11-28 05:12:31.506143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:02.460 [2024-11-28 05:12:31.506152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.460 [2024-11-28 05:12:31.506160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:02.460 [2024-11-28 05:12:31.506167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:02.460 [2024-11-28 05:12:31.506173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.460 [2024-11-28 05:12:31.506198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:02.460 [2024-11-28 05:12:31.506205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:02.460 [2024-11-28 05:12:31.506213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.460 [2024-11-28 05:12:31.506220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:02.460 [2024-11-28 05:12:31.506226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:02.460 [2024-11-28 05:12:31.506233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.460 [2024-11-28 
05:12:31.506240] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:02.460 [2024-11-28 05:12:31.506251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:02.460 [2024-11-28 05:12:31.506259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.460 [2024-11-28 05:12:31.506266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.460 [2024-11-28 05:12:31.506274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:02.460 [2024-11-28 05:12:31.506283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:02.460 [2024-11-28 05:12:31.506292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:02.460 [2024-11-28 05:12:31.506300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:02.460 [2024-11-28 05:12:31.506306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:02.460 [2024-11-28 05:12:31.506313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:02.460 [2024-11-28 05:12:31.506321] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:02.460 [2024-11-28 05:12:31.506332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.460 [2024-11-28 05:12:31.506344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:02.460 [2024-11-28 05:12:31.506351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:02.460 [2024-11-28 05:12:31.506359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:02.460 [2024-11-28 05:12:31.506366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:02.460 [2024-11-28 05:12:31.506374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:02.460 [2024-11-28 05:12:31.506381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:02.461 [2024-11-28 05:12:31.506389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:02.461 [2024-11-28 05:12:31.506396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:02.461 [2024-11-28 05:12:31.506403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:02.461 [2024-11-28 05:12:31.506416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:02.461 [2024-11-28 05:12:31.506425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:02.461 [2024-11-28 05:12:31.506432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:02.461 [2024-11-28 05:12:31.506439] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:02.461 [2024-11-28 05:12:31.506446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:02.461 [2024-11-28 05:12:31.506453] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:02.461 [2024-11-28 05:12:31.506461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.461 [2024-11-28 05:12:31.506471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:02.461 [2024-11-28 05:12:31.506479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:02.461 [2024-11-28 05:12:31.506486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:02.461 [2024-11-28 05:12:31.506493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:02.461 [2024-11-28 05:12:31.506501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-28 05:12:31.506509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:02.461 [2024-11-28 05:12:31.506517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:23:02.461 [2024-11-28 05:12:31.506527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-28 05:12:31.520238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-28 05:12:31.520434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.461 [2024-11-28 05:12:31.520462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.667 ms 00:23:02.461 [2024-11-28 05:12:31.520472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-28 05:12:31.520564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-28 05:12:31.520573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:02.461 [2024-11-28 05:12:31.520583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:02.461 [2024-11-28 05:12:31.520597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-28 05:12:31.538743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-28 05:12:31.538781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:02.461 [2024-11-28 05:12:31.538793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.086 ms 00:23:02.461 [2024-11-28 05:12:31.538800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-28 05:12:31.538838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-28 05:12:31.538847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.461 [2024-11-28 05:12:31.538855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:02.461 [2024-11-28 05:12:31.538862] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-28 05:12:31.539226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-28 05:12:31.539251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.461 [2024-11-28 05:12:31.539260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:23:02.461 [2024-11-28 05:12:31.539267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-28 05:12:31.539387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-28 05:12:31.539403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.461 [2024-11-28 05:12:31.539412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:02.461 [2024-11-28 05:12:31.539420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-28 05:12:31.544400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-28 05:12:31.544430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.461 [2024-11-28 05:12:31.544439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.960 ms 00:23:02.461 [2024-11-28 05:12:31.544446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-28 05:12:31.546659] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:02.461 [2024-11-28 05:12:31.546692] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:02.461 [2024-11-28 05:12:31.546707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-28 05:12:31.546715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:02.461 [2024-11-28 05:12:31.546723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.179 ms 00:23:02.461 [2024-11-28 05:12:31.546730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-28 05:12:31.561649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-28 05:12:31.561777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:02.461 [2024-11-28 05:12:31.561794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.883 ms 00:23:02.461 [2024-11-28 05:12:31.561809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-28 05:12:31.563802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-28 05:12:31.563835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:02.461 [2024-11-28 05:12:31.563844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.959 ms 00:23:02.461 [2024-11-28 05:12:31.563851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-28 05:12:31.565553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-28 05:12:31.565581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:02.461 [2024-11-28 05:12:31.565590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:23:02.461 [2024-11-28 05:12:31.565597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
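(Two figures in the layout dump above can be cross-checked directly: the l2p region size follows from the L2P entry count and address size, and the NV cache regions tile back to back. A minimal sketch, assuming the 0.12/0.62/0.88-style offsets are 1/8-MiB values rounded to two decimals:

    # Cross-check the ftl_layout dump above.
    l2p_bytes = 20971520 * 4          # "L2P entries" x "L2P address size"
    print(l2p_bytes / 2**20, "MiB")   # 80.0 MiB, matching "Region l2p ... blocks: 80.00 MiB"

    # Region (offset, size) in MiB; 0.12-style values taken as 0.125 MiB (assumption).
    regions = [("sb", 0.0, 0.125), ("l2p", 0.125, 80.0), ("band_md", 80.125, 0.5),
               ("band_md_mirror", 80.625, 0.5), ("p2l0", 81.125, 8.0), ("p2l1", 89.125, 8.0),
               ("p2l2", 97.125, 8.0), ("p2l3", 105.125, 8.0), ("trim_md", 113.125, 0.25)]
    for (_, off, sz), (nxt, off2, _) in zip(regions, regions[1:]):
        assert off + sz == off2, nxt   # each region ends exactly where the next begins

)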
00:23:02.461 [2024-11-28 05:12:31.565914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-28 05:12:31.565925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:02.461 [2024-11-28 05:12:31.565934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:23:02.461 [2024-11-28 05:12:31.565941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-28 05:12:31.582009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-28 05:12:31.582058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:02.461 [2024-11-28 05:12:31.582070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.048 ms 00:23:02.461 [2024-11-28 05:12:31.582077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-28 05:12:31.589530] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:02.461 [2024-11-28 05:12:31.591804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-28 05:12:31.591836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:02.461 [2024-11-28 05:12:31.591847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.690 ms 00:23:02.461 [2024-11-28 05:12:31.591855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-28 05:12:31.591926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.461 [2024-11-28 05:12:31.591937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:02.461 [2024-11-28 05:12:31.591947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:02.461 [2024-11-28 05:12:31.591961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.461 [2024-11-28 05:12:31.593369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.462 [2024-11-28 05:12:31.593400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:02.462 [2024-11-28 05:12:31.593409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.371 ms 00:23:02.462 [2024-11-28 05:12:31.593417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.462 [2024-11-28 05:12:31.593443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.462 [2024-11-28 05:12:31.593451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:02.462 [2024-11-28 05:12:31.593459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:02.462 [2024-11-28 05:12:31.593466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.462 [2024-11-28 05:12:31.593498] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:02.462 [2024-11-28 05:12:31.593508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.462 [2024-11-28 05:12:31.593515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:02.462 [2024-11-28 05:12:31.593527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:02.462 [2024-11-28 05:12:31.593535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.462 [2024-11-28 05:12:31.597725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.462 
[2024-11-28 05:12:31.597765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:02.462 [2024-11-28 05:12:31.597777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.172 ms 00:23:02.462 [2024-11-28 05:12:31.597785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.462 [2024-11-28 05:12:31.597858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.462 [2024-11-28 05:12:31.597868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:02.462 [2024-11-28 05:12:31.597876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:02.462 [2024-11-28 05:12:31.597891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.462 [2024-11-28 05:12:31.598856] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 109.882 ms, result 0 00:23:03.852  [2024-11-28T05:12:34.084Z] Copying: 9472/1048576 [kB] (9472 kBps) [2024-11-28T05:12:35.028Z] Copying: 24/1024 [MB] (15 MBps) [progress ticks roughly once per second, advancing from 24/1024 [MB] to 1022/1024 [MB] at 10-24 MBps per tick] [2024-11-28T05:13:37.275Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-28 05:13:37.208196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.991 [2024-11-28 05:13:37.208276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:07.991 [2024-11-28 05:13:37.208294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:07.991 [2024-11-28 05:13:37.208311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.991 [2024-11-28 05:13:37.208341] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:07.991 [2024-11-28 05:13:37.208965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.991 [2024-11-28 05:13:37.208995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:07.991 [2024-11-28 05:13:37.209011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:24:07.991 [2024-11-28 05:13:37.209022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.991 [2024-11-28 05:13:37.209325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.991 [2024-11-28 05:13:37.209339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:07.991 [2024-11-28 05:13:37.209349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:24:07.991 [2024-11-28 05:13:37.209360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.991 [2024-11-28 05:13:37.215601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.991 [2024-11-28 05:13:37.215640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:07.991 [2024-11-28 05:13:37.215653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.222 ms 00:24:07.991 [2024-11-28 05:13:37.215663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.991 [2024-11-28 05:13:37.222500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.991 [2024-11-28 05:13:37.222527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
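(The "average 15 MBps" reported above is consistent with the wall-clock span of the copy. A rough sketch from the timestamps, assuming the copy starts right after FTL startup finishes at 05:12:31 and the final "1024/1024 [MB]" tick lands at 05:13:37:

    # Rough check of the reported average copy rate from the progress timestamps above.
    from datetime import datetime
    start = datetime(2024, 11, 28, 5, 12, 31)   # copy begins just after FTL startup completes
    end = datetime(2024, 11, 28, 5, 13, 37)     # final "1024/1024 [MB]" progress tick
    print(1024 / (end - start).total_seconds())  # ~15.5 MB/s, in line with "average 15 MBps"

)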
00:24:07.991 [2024-11-28 05:13:37.222536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.793 ms 00:24:07.991 [2024-11-28 05:13:37.222548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.991 [2024-11-28 05:13:37.224695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.991 [2024-11-28 05:13:37.224725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:07.991 [2024-11-28 05:13:37.224733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.101 ms 00:24:07.991 [2024-11-28 05:13:37.224739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.991 [2024-11-28 05:13:37.228775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.991 [2024-11-28 05:13:37.228803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:07.991 [2024-11-28 05:13:37.228811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.010 ms 00:24:07.991 [2024-11-28 05:13:37.228825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.253 [2024-11-28 05:13:37.490979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.253 [2024-11-28 05:13:37.491008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:08.253 [2024-11-28 05:13:37.491018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 262.125 ms 00:24:08.253 [2024-11-28 05:13:37.491024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.253 [2024-11-28 05:13:37.492948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.253 [2024-11-28 05:13:37.492974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:08.253 [2024-11-28 05:13:37.492981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.912 ms 00:24:08.253 [2024-11-28 05:13:37.492987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.253 [2024-11-28 05:13:37.494369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.253 [2024-11-28 05:13:37.494393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:08.253 [2024-11-28 05:13:37.494400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.359 ms 00:24:08.253 [2024-11-28 05:13:37.494406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.253 [2024-11-28 05:13:37.495560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.253 [2024-11-28 05:13:37.495586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:08.253 [2024-11-28 05:13:37.495592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.132 ms 00:24:08.253 [2024-11-28 05:13:37.495598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.253 [2024-11-28 05:13:37.496565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.253 [2024-11-28 05:13:37.496589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:08.253 [2024-11-28 05:13:37.496597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:24:08.253 [2024-11-28 05:13:37.496602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.253 [2024-11-28 05:13:37.496624] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:08.253 [2024-11-28 
05:13:37.496635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:08.253 [Bands 2-99: every band reports 0 / 261120 wr_cnt: 0 state: free] 00:24:08.254 [2024-11-28 05:13:37.497284] ftl_debug.c: 167:ftl_dev_dump_bands:
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:08.254 [2024-11-28 05:13:37.497297] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:08.254 [2024-11-28 05:13:37.497304] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ceea9fc-b1d3-4cf8-b052-f90915354471 00:24:08.254 [2024-11-28 05:13:37.497311] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:08.254 [2024-11-28 05:13:37.497320] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 26816 00:24:08.254 [2024-11-28 05:13:37.497331] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 25856 00:24:08.254 [2024-11-28 05:13:37.497338] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0371 00:24:08.254 [2024-11-28 05:13:37.497344] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:08.254 [2024-11-28 05:13:37.497353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:08.254 [2024-11-28 05:13:37.497359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:08.254 [2024-11-28 05:13:37.497365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:08.254 [2024-11-28 05:13:37.497370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:08.254 [2024-11-28 05:13:37.497377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.254 [2024-11-28 05:13:37.497384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:08.254 [2024-11-28 05:13:37.497391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:24:08.254 [2024-11-28 05:13:37.497398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.255 [2024-11-28 05:13:37.499210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.255 [2024-11-28 05:13:37.499230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:08.255 [2024-11-28 05:13:37.499237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.799 ms 00:24:08.255 [2024-11-28 05:13:37.499245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.255 [2024-11-28 05:13:37.499331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.255 [2024-11-28 05:13:37.499338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:08.255 [2024-11-28 05:13:37.499345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:08.255 [2024-11-28 05:13:37.499351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.255 [2024-11-28 05:13:37.504902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.255 [2024-11-28 05:13:37.504930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:08.255 [2024-11-28 05:13:37.504938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.255 [2024-11-28 05:13:37.504944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.255 [2024-11-28 05:13:37.504992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.255 [2024-11-28 05:13:37.505000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:08.255 [2024-11-28 05:13:37.505007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.255 [2024-11-28 05:13:37.505013] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.255 [2024-11-28 05:13:37.505061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.255 [2024-11-28 05:13:37.505071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:08.255 [2024-11-28 05:13:37.505079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.255 [2024-11-28 05:13:37.505085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.255 [2024-11-28 05:13:37.505097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.255 [2024-11-28 05:13:37.505104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:08.255 [2024-11-28 05:13:37.505110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.255 [2024-11-28 05:13:37.505117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.255 [2024-11-28 05:13:37.515688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.255 [2024-11-28 05:13:37.515718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:08.255 [2024-11-28 05:13:37.515727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.255 [2024-11-28 05:13:37.515733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.255 [2024-11-28 05:13:37.524075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.255 [2024-11-28 05:13:37.524105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:08.255 [2024-11-28 05:13:37.524124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.255 [2024-11-28 05:13:37.524134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.255 [2024-11-28 05:13:37.524205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.255 [2024-11-28 05:13:37.524214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:08.255 [2024-11-28 05:13:37.524220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.255 [2024-11-28 05:13:37.524227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.255 [2024-11-28 05:13:37.524247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.255 [2024-11-28 05:13:37.524254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:08.255 [2024-11-28 05:13:37.524262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.255 [2024-11-28 05:13:37.524268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.255 [2024-11-28 05:13:37.524326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.255 [2024-11-28 05:13:37.524337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:08.255 [2024-11-28 05:13:37.524344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.255 [2024-11-28 05:13:37.524350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.255 [2024-11-28 05:13:37.524372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.255 [2024-11-28 05:13:37.524379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:08.255 [2024-11-28 05:13:37.524386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:24:08.255 [2024-11-28 05:13:37.524393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.255 [2024-11-28 05:13:37.524425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.255 [2024-11-28 05:13:37.524435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:08.255 [2024-11-28 05:13:37.524441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.255 [2024-11-28 05:13:37.524449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.255 [2024-11-28 05:13:37.524486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:08.255 [2024-11-28 05:13:37.524494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:08.255 [2024-11-28 05:13:37.524500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:08.255 [2024-11-28 05:13:37.524509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.255 [2024-11-28 05:13:37.524627] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 316.424 ms, result 0 00:24:08.516 00:24:08.516 00:24:08.516 05:13:37 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:11.058 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:11.058 05:13:39 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:11.058 05:13:39 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:11.058 05:13:39 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:11.058 05:13:39 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:11.058 05:13:39 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:11.058 Process with pid 88014 is not found 00:24:11.058 Remove shared memory files 00:24:11.058 05:13:39 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 88014 00:24:11.058 05:13:39 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88014 ']' 00:24:11.058 05:13:39 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88014 00:24:11.058 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88014) - No such process 00:24:11.058 05:13:39 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 88014 is not found' 00:24:11.058 05:13:39 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:11.058 05:13:39 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:11.059 05:13:39 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:11.059 05:13:39 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:11.059 05:13:39 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:11.059 05:13:39 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:11.059 05:13:39 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:11.059 00:24:11.059 real 4m39.106s 00:24:11.059 user 4m27.405s 00:24:11.059 sys 0m11.273s 00:24:11.059 05:13:39 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.059 05:13:39 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:11.059 ************************************ 00:24:11.059 END TEST ftl_restore 00:24:11.059 ************************************ 00:24:11.059 05:13:40 ftl -- ftl/ftl.sh@77 -- # 
run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:11.059 05:13:40 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:11.059 05:13:40 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.059 05:13:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:11.059 ************************************ 00:24:11.059 START TEST ftl_dirty_shutdown 00:24:11.059 ************************************ 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:11.059 * Looking for test storage... 00:24:11.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:11.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.059 --rc genhtml_branch_coverage=1 00:24:11.059 --rc genhtml_function_coverage=1 00:24:11.059 --rc genhtml_legend=1 00:24:11.059 --rc geninfo_all_blocks=1 00:24:11.059 --rc geninfo_unexecuted_blocks=1 00:24:11.059 00:24:11.059 ' 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:11.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.059 --rc genhtml_branch_coverage=1 00:24:11.059 --rc genhtml_function_coverage=1 00:24:11.059 --rc genhtml_legend=1 00:24:11.059 --rc geninfo_all_blocks=1 00:24:11.059 --rc geninfo_unexecuted_blocks=1 00:24:11.059 00:24:11.059 ' 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:11.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.059 --rc genhtml_branch_coverage=1 00:24:11.059 --rc genhtml_function_coverage=1 00:24:11.059 --rc genhtml_legend=1 00:24:11.059 --rc geninfo_all_blocks=1 00:24:11.059 --rc geninfo_unexecuted_blocks=1 00:24:11.059 00:24:11.059 ' 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:11.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.059 --rc genhtml_branch_coverage=1 00:24:11.059 --rc genhtml_function_coverage=1 00:24:11.059 --rc genhtml_legend=1 00:24:11.059 --rc geninfo_all_blocks=1 00:24:11.059 --rc geninfo_unexecuted_blocks=1 00:24:11.059 00:24:11.059 ' 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:11.059 05:13:40 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=90958 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 90958 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 90958 ']' 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.059 05:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.060 05:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.060 05:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.060 05:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:11.060 [2024-11-28 05:13:40.303861] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
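The 'waitforlisten 90958' step traced above blocks the test until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket (/var/tmp/spdk.sock). A minimal sketch of that polling pattern follows — this is not the actual autotest_common.sh implementation; the probe RPC, retry count, and sleep interval here are illustrative assumptions:

    waitforlisten_sketch() {
        # $1 = target pid, $2 = RPC socket path (defaults to spdk_tgt's socket)
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # bail out early if the target died during startup
            kill -0 "$pid" 2>/dev/null || return 1
            # rpc_get_methods succeeds once the server is accepting requests
            if scripts/rpc.py -t 1 -s "$rpc_sock" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1 # timed out waiting for the RPC server
    }

Once a check like this returns 0, the RPC-driven setup that follows (bdev_nvme_attach_controller, lvstore creation, bdev_ftl_create) can proceed safely against the live socket.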
00:24:11.060 [2024-11-28 05:13:40.304014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90958 ] 00:24:11.320 [2024-11-28 05:13:40.449577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.320 [2024-11-28 05:13:40.490224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.892 05:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.892 05:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:11.892 05:13:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:11.892 05:13:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:11.892 05:13:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:11.892 05:13:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:11.892 05:13:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:11.892 05:13:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:12.152 05:13:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:12.152 05:13:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:12.152 05:13:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:12.152 05:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:12.152 05:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:12.152 05:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:12.152 05:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:12.152 05:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:12.413 05:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:12.413 { 00:24:12.413 "name": "nvme0n1", 00:24:12.413 "aliases": [ 00:24:12.413 "e7d8018f-0b27-498d-a0e0-4b13e9d5743d" 00:24:12.413 ], 00:24:12.413 "product_name": "NVMe disk", 00:24:12.413 "block_size": 4096, 00:24:12.413 "num_blocks": 1310720, 00:24:12.413 "uuid": "e7d8018f-0b27-498d-a0e0-4b13e9d5743d", 00:24:12.413 "numa_id": -1, 00:24:12.413 "assigned_rate_limits": { 00:24:12.413 "rw_ios_per_sec": 0, 00:24:12.413 "rw_mbytes_per_sec": 0, 00:24:12.413 "r_mbytes_per_sec": 0, 00:24:12.413 "w_mbytes_per_sec": 0 00:24:12.413 }, 00:24:12.413 "claimed": true, 00:24:12.413 "claim_type": "read_many_write_one", 00:24:12.413 "zoned": false, 00:24:12.413 "supported_io_types": { 00:24:12.413 "read": true, 00:24:12.413 "write": true, 00:24:12.413 "unmap": true, 00:24:12.413 "flush": true, 00:24:12.413 "reset": true, 00:24:12.413 "nvme_admin": true, 00:24:12.413 "nvme_io": true, 00:24:12.413 "nvme_io_md": false, 00:24:12.413 "write_zeroes": true, 00:24:12.413 "zcopy": false, 00:24:12.413 "get_zone_info": false, 00:24:12.413 "zone_management": false, 00:24:12.413 "zone_append": false, 00:24:12.413 "compare": true, 00:24:12.413 "compare_and_write": false, 00:24:12.413 "abort": true, 00:24:12.413 "seek_hole": false, 00:24:12.413 "seek_data": false, 00:24:12.413 
"copy": true, 00:24:12.413 "nvme_iov_md": false 00:24:12.413 }, 00:24:12.413 "driver_specific": { 00:24:12.413 "nvme": [ 00:24:12.413 { 00:24:12.413 "pci_address": "0000:00:11.0", 00:24:12.413 "trid": { 00:24:12.413 "trtype": "PCIe", 00:24:12.413 "traddr": "0000:00:11.0" 00:24:12.413 }, 00:24:12.413 "ctrlr_data": { 00:24:12.413 "cntlid": 0, 00:24:12.413 "vendor_id": "0x1b36", 00:24:12.413 "model_number": "QEMU NVMe Ctrl", 00:24:12.413 "serial_number": "12341", 00:24:12.413 "firmware_revision": "8.0.0", 00:24:12.413 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:12.413 "oacs": { 00:24:12.413 "security": 0, 00:24:12.413 "format": 1, 00:24:12.413 "firmware": 0, 00:24:12.413 "ns_manage": 1 00:24:12.413 }, 00:24:12.413 "multi_ctrlr": false, 00:24:12.413 "ana_reporting": false 00:24:12.413 }, 00:24:12.413 "vs": { 00:24:12.413 "nvme_version": "1.4" 00:24:12.413 }, 00:24:12.413 "ns_data": { 00:24:12.413 "id": 1, 00:24:12.413 "can_share": false 00:24:12.413 } 00:24:12.413 } 00:24:12.413 ], 00:24:12.413 "mp_policy": "active_passive" 00:24:12.413 } 00:24:12.413 } 00:24:12.413 ]' 00:24:12.413 05:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:12.413 05:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:12.413 05:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:12.413 05:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:12.413 05:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:12.413 05:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:24:12.413 05:13:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:12.413 05:13:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:12.413 05:13:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:12.413 05:13:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:12.413 05:13:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:12.673 05:13:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=28eb41a2-cb29-47c3-9a24-fd7e50964750 00:24:12.673 05:13:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:12.673 05:13:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 28eb41a2-cb29-47c3-9a24-fd7e50964750 00:24:12.934 05:13:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:13.193 05:13:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=8a5325fe-a140-44e6-a7eb-71e15ec38500 00:24:13.193 05:13:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8a5325fe-a140-44e6-a7eb-71e15ec38500 00:24:13.453 05:13:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=5f2399d6-6541-4464-9bae-9021eea000fc 00:24:13.453 05:13:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:13.454 05:13:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5f2399d6-6541-4464-9bae-9021eea000fc 00:24:13.454 05:13:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:13.454 05:13:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:13.454 05:13:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=5f2399d6-6541-4464-9bae-9021eea000fc 00:24:13.454 05:13:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:13.454 05:13:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5f2399d6-6541-4464-9bae-9021eea000fc 00:24:13.454 05:13:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5f2399d6-6541-4464-9bae-9021eea000fc 00:24:13.454 05:13:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:13.454 05:13:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:13.454 05:13:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:13.454 05:13:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5f2399d6-6541-4464-9bae-9021eea000fc 00:24:13.454 05:13:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:13.454 { 00:24:13.454 "name": "5f2399d6-6541-4464-9bae-9021eea000fc", 00:24:13.454 "aliases": [ 00:24:13.454 "lvs/nvme0n1p0" 00:24:13.454 ], 00:24:13.454 "product_name": "Logical Volume", 00:24:13.454 "block_size": 4096, 00:24:13.454 "num_blocks": 26476544, 00:24:13.454 "uuid": "5f2399d6-6541-4464-9bae-9021eea000fc", 00:24:13.454 "assigned_rate_limits": { 00:24:13.454 "rw_ios_per_sec": 0, 00:24:13.454 "rw_mbytes_per_sec": 0, 00:24:13.454 "r_mbytes_per_sec": 0, 00:24:13.454 "w_mbytes_per_sec": 0 00:24:13.454 }, 00:24:13.454 "claimed": false, 00:24:13.454 "zoned": false, 00:24:13.454 "supported_io_types": { 00:24:13.454 "read": true, 00:24:13.454 "write": true, 00:24:13.454 "unmap": true, 00:24:13.454 "flush": false, 00:24:13.454 "reset": true, 00:24:13.454 "nvme_admin": false, 00:24:13.454 "nvme_io": false, 00:24:13.454 "nvme_io_md": false, 00:24:13.454 "write_zeroes": true, 00:24:13.454 "zcopy": false, 00:24:13.454 "get_zone_info": false, 00:24:13.454 "zone_management": false, 00:24:13.454 "zone_append": false, 00:24:13.454 "compare": false, 00:24:13.454 "compare_and_write": false, 00:24:13.454 "abort": false, 00:24:13.454 "seek_hole": true, 00:24:13.454 "seek_data": true, 00:24:13.454 "copy": false, 00:24:13.454 "nvme_iov_md": false 00:24:13.454 }, 00:24:13.454 "driver_specific": { 00:24:13.454 "lvol": { 00:24:13.454 "lvol_store_uuid": "8a5325fe-a140-44e6-a7eb-71e15ec38500", 00:24:13.454 "base_bdev": "nvme0n1", 00:24:13.454 "thin_provision": true, 00:24:13.454 "num_allocated_clusters": 0, 00:24:13.454 "snapshot": false, 00:24:13.454 "clone": false, 00:24:13.454 "esnap_clone": false 00:24:13.454 } 00:24:13.454 } 00:24:13.454 } 00:24:13.454 ]' 00:24:13.454 05:13:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:13.454 05:13:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:13.454 05:13:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:13.713 05:13:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:13.713 05:13:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:13.713 05:13:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:13.713 05:13:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:13.713 05:13:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:13.713 05:13:42 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:13.973 05:13:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:13.973 05:13:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:13.973 05:13:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 5f2399d6-6541-4464-9bae-9021eea000fc 00:24:13.973 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5f2399d6-6541-4464-9bae-9021eea000fc 00:24:13.973 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:13.973 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:13.973 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:13.973 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5f2399d6-6541-4464-9bae-9021eea000fc 00:24:13.973 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:13.973 { 00:24:13.973 "name": "5f2399d6-6541-4464-9bae-9021eea000fc", 00:24:13.973 "aliases": [ 00:24:13.973 "lvs/nvme0n1p0" 00:24:13.973 ], 00:24:13.973 "product_name": "Logical Volume", 00:24:13.973 "block_size": 4096, 00:24:13.973 "num_blocks": 26476544, 00:24:13.973 "uuid": "5f2399d6-6541-4464-9bae-9021eea000fc", 00:24:13.973 "assigned_rate_limits": { 00:24:13.973 "rw_ios_per_sec": 0, 00:24:13.973 "rw_mbytes_per_sec": 0, 00:24:13.973 "r_mbytes_per_sec": 0, 00:24:13.973 "w_mbytes_per_sec": 0 00:24:13.973 }, 00:24:13.973 "claimed": false, 00:24:13.973 "zoned": false, 00:24:13.973 "supported_io_types": { 00:24:13.973 "read": true, 00:24:13.973 "write": true, 00:24:13.973 "unmap": true, 00:24:13.974 "flush": false, 00:24:13.974 "reset": true, 00:24:13.974 "nvme_admin": false, 00:24:13.974 "nvme_io": false, 00:24:13.974 "nvme_io_md": false, 00:24:13.974 "write_zeroes": true, 00:24:13.974 "zcopy": false, 00:24:13.974 "get_zone_info": false, 00:24:13.974 "zone_management": false, 00:24:13.974 "zone_append": false, 00:24:13.974 "compare": false, 00:24:13.974 "compare_and_write": false, 00:24:13.974 "abort": false, 00:24:13.974 "seek_hole": true, 00:24:13.974 "seek_data": true, 00:24:13.974 "copy": false, 00:24:13.974 "nvme_iov_md": false 00:24:13.974 }, 00:24:13.974 "driver_specific": { 00:24:13.974 "lvol": { 00:24:13.974 "lvol_store_uuid": "8a5325fe-a140-44e6-a7eb-71e15ec38500", 00:24:13.974 "base_bdev": "nvme0n1", 00:24:13.974 "thin_provision": true, 00:24:13.974 "num_allocated_clusters": 0, 00:24:13.974 "snapshot": false, 00:24:13.974 "clone": false, 00:24:13.974 "esnap_clone": false 00:24:13.974 } 00:24:13.974 } 00:24:13.974 } 00:24:13.974 ]' 00:24:13.974 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:14.233 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:14.233 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:14.233 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:14.233 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:14.233 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:14.233 05:13:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:14.233 05:13:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:14.233 05:13:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:14.233 05:13:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 5f2399d6-6541-4464-9bae-9021eea000fc 00:24:14.233 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5f2399d6-6541-4464-9bae-9021eea000fc 00:24:14.233 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:14.233 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:14.233 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:14.233 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5f2399d6-6541-4464-9bae-9021eea000fc 00:24:14.492 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:14.492 { 00:24:14.492 "name": "5f2399d6-6541-4464-9bae-9021eea000fc", 00:24:14.492 "aliases": [ 00:24:14.492 "lvs/nvme0n1p0" 00:24:14.492 ], 00:24:14.492 "product_name": "Logical Volume", 00:24:14.492 "block_size": 4096, 00:24:14.492 "num_blocks": 26476544, 00:24:14.492 "uuid": "5f2399d6-6541-4464-9bae-9021eea000fc", 00:24:14.492 "assigned_rate_limits": { 00:24:14.492 "rw_ios_per_sec": 0, 00:24:14.492 "rw_mbytes_per_sec": 0, 00:24:14.492 "r_mbytes_per_sec": 0, 00:24:14.492 "w_mbytes_per_sec": 0 00:24:14.492 }, 00:24:14.492 "claimed": false, 00:24:14.492 "zoned": false, 00:24:14.492 "supported_io_types": { 00:24:14.492 "read": true, 00:24:14.492 "write": true, 00:24:14.492 "unmap": true, 00:24:14.492 "flush": false, 00:24:14.492 "reset": true, 00:24:14.492 "nvme_admin": false, 00:24:14.492 "nvme_io": false, 00:24:14.492 "nvme_io_md": false, 00:24:14.492 "write_zeroes": true, 00:24:14.492 "zcopy": false, 00:24:14.492 "get_zone_info": false, 00:24:14.492 "zone_management": false, 00:24:14.492 "zone_append": false, 00:24:14.492 "compare": false, 00:24:14.492 "compare_and_write": false, 00:24:14.492 "abort": false, 00:24:14.492 "seek_hole": true, 00:24:14.492 "seek_data": true, 00:24:14.492 "copy": false, 00:24:14.492 "nvme_iov_md": false 00:24:14.492 }, 00:24:14.492 "driver_specific": { 00:24:14.492 "lvol": { 00:24:14.492 "lvol_store_uuid": "8a5325fe-a140-44e6-a7eb-71e15ec38500", 00:24:14.492 "base_bdev": "nvme0n1", 00:24:14.492 "thin_provision": true, 00:24:14.492 "num_allocated_clusters": 0, 00:24:14.492 "snapshot": false, 00:24:14.492 "clone": false, 00:24:14.492 "esnap_clone": false 00:24:14.492 } 00:24:14.492 } 00:24:14.492 } 00:24:14.492 ]' 00:24:14.492 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:14.492 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:14.492 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:14.752 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:14.752 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:14.752 05:13:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:14.752 05:13:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:14.753 05:13:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 5f2399d6-6541-4464-9bae-9021eea000fc 
--l2p_dram_limit 10' 00:24:14.753 05:13:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:14.753 05:13:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:14.753 05:13:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:14.753 05:13:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5f2399d6-6541-4464-9bae-9021eea000fc --l2p_dram_limit 10 -c nvc0n1p0 00:24:14.753 [2024-11-28 05:13:43.975440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.753 [2024-11-28 05:13:43.975493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:14.753 [2024-11-28 05:13:43.975505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:14.753 [2024-11-28 05:13:43.975517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.753 [2024-11-28 05:13:43.975566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.753 [2024-11-28 05:13:43.975578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:14.753 [2024-11-28 05:13:43.975585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:14.753 [2024-11-28 05:13:43.975594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.753 [2024-11-28 05:13:43.975609] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:14.753 [2024-11-28 05:13:43.975834] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:14.753 [2024-11-28 05:13:43.975848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.753 [2024-11-28 05:13:43.975858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:14.753 [2024-11-28 05:13:43.975865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:24:14.753 [2024-11-28 05:13:43.975878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.753 [2024-11-28 05:13:43.975904] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8895fcb2-4145-407f-a2a5-484f995f7d99 00:24:14.753 [2024-11-28 05:13:43.977206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.753 [2024-11-28 05:13:43.977337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:14.753 [2024-11-28 05:13:43.977353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:14.753 [2024-11-28 05:13:43.977360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.753 [2024-11-28 05:13:43.984236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.753 [2024-11-28 05:13:43.984333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:14.753 [2024-11-28 05:13:43.984351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.797 ms 00:24:14.753 [2024-11-28 05:13:43.984358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.753 [2024-11-28 05:13:43.984426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.753 [2024-11-28 05:13:43.984435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:14.753 [2024-11-28 05:13:43.984444] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:14.753 [2024-11-28 05:13:43.984450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.753 [2024-11-28 05:13:43.984490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.753 [2024-11-28 05:13:43.984498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:14.753 [2024-11-28 05:13:43.984506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:14.753 [2024-11-28 05:13:43.984515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.753 [2024-11-28 05:13:43.984535] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:14.753 [2024-11-28 05:13:43.986199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.753 [2024-11-28 05:13:43.986219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:14.753 [2024-11-28 05:13:43.986227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:24:14.753 [2024-11-28 05:13:43.986235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.753 [2024-11-28 05:13:43.986262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.753 [2024-11-28 05:13:43.986271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:14.753 [2024-11-28 05:13:43.986281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:14.753 [2024-11-28 05:13:43.986291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.753 [2024-11-28 05:13:43.986308] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:14.753 [2024-11-28 05:13:43.986427] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:14.753 [2024-11-28 05:13:43.986441] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:14.753 [2024-11-28 05:13:43.986454] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:14.753 [2024-11-28 05:13:43.986463] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:14.753 [2024-11-28 05:13:43.986474] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:14.753 [2024-11-28 05:13:43.986482] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:14.753 [2024-11-28 05:13:43.986491] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:14.753 [2024-11-28 05:13:43.986497] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:14.753 [2024-11-28 05:13:43.986504] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:14.753 [2024-11-28 05:13:43.986512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.753 [2024-11-28 05:13:43.986519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:14.753 [2024-11-28 05:13:43.986526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:24:14.753 [2024-11-28 05:13:43.986533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.753 [2024-11-28 05:13:43.986599] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.753 [2024-11-28 05:13:43.986615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:14.753 [2024-11-28 05:13:43.986621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:14.753 [2024-11-28 05:13:43.986631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.753 [2024-11-28 05:13:43.986704] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:14.753 [2024-11-28 05:13:43.986714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:14.753 [2024-11-28 05:13:43.986720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:14.753 [2024-11-28 05:13:43.986728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.753 [2024-11-28 05:13:43.986734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:14.753 [2024-11-28 05:13:43.986741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:14.753 [2024-11-28 05:13:43.986746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:14.753 [2024-11-28 05:13:43.986755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:14.753 [2024-11-28 05:13:43.986761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:14.753 [2024-11-28 05:13:43.986768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:14.753 [2024-11-28 05:13:43.986773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:14.753 [2024-11-28 05:13:43.986781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:14.753 [2024-11-28 05:13:43.986787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:14.753 [2024-11-28 05:13:43.986795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:14.753 [2024-11-28 05:13:43.986801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:14.753 [2024-11-28 05:13:43.986808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.753 [2024-11-28 05:13:43.986813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:14.753 [2024-11-28 05:13:43.986820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:14.753 [2024-11-28 05:13:43.986826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.753 [2024-11-28 05:13:43.986835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:14.753 [2024-11-28 05:13:43.986843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:14.753 [2024-11-28 05:13:43.986851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.754 [2024-11-28 05:13:43.986856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:14.754 [2024-11-28 05:13:43.986865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:14.754 [2024-11-28 05:13:43.986871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.754 [2024-11-28 05:13:43.986878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:14.754 [2024-11-28 05:13:43.986884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:14.754 [2024-11-28 05:13:43.986892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.754 [2024-11-28 05:13:43.986899] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:14.754 [2024-11-28 05:13:43.986908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:14.754 [2024-11-28 05:13:43.986914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.754 [2024-11-28 05:13:43.986921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:14.754 [2024-11-28 05:13:43.986928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:14.754 [2024-11-28 05:13:43.986937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:14.754 [2024-11-28 05:13:43.986943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:14.754 [2024-11-28 05:13:43.986950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:14.754 [2024-11-28 05:13:43.986956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:14.754 [2024-11-28 05:13:43.986963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:14.754 [2024-11-28 05:13:43.986970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:14.754 [2024-11-28 05:13:43.986977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.754 [2024-11-28 05:13:43.986983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:14.754 [2024-11-28 05:13:43.986990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:14.754 [2024-11-28 05:13:43.986996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.754 [2024-11-28 05:13:43.987002] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:14.754 [2024-11-28 05:13:43.987014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:14.754 [2024-11-28 05:13:43.987023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:14.754 [2024-11-28 05:13:43.987032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.754 [2024-11-28 05:13:43.987042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:14.754 [2024-11-28 05:13:43.987048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:14.754 [2024-11-28 05:13:43.987056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:14.754 [2024-11-28 05:13:43.987062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:14.754 [2024-11-28 05:13:43.987070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:14.754 [2024-11-28 05:13:43.987077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:14.754 [2024-11-28 05:13:43.987089] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:14.754 [2024-11-28 05:13:43.987097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:14.754 [2024-11-28 05:13:43.987110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:14.754 [2024-11-28 05:13:43.987116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:14.754 [2024-11-28 05:13:43.987124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:14.754 [2024-11-28 05:13:43.987131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:14.754 [2024-11-28 05:13:43.987139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:14.754 [2024-11-28 05:13:43.987146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:14.754 [2024-11-28 05:13:43.987155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:14.754 [2024-11-28 05:13:43.987161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:14.754 [2024-11-28 05:13:43.987170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:14.754 [2024-11-28 05:13:43.987197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:14.754 [2024-11-28 05:13:43.987206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:14.754 [2024-11-28 05:13:43.987212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:14.754 [2024-11-28 05:13:43.987220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:14.754 [2024-11-28 05:13:43.987226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:14.754 [2024-11-28 05:13:43.987233] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:14.754 [2024-11-28 05:13:43.987239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:14.754 [2024-11-28 05:13:43.987247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:14.754 [2024-11-28 05:13:43.987252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:14.754 [2024-11-28 05:13:43.987259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:14.754 [2024-11-28 05:13:43.987265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:14.754 [2024-11-28 05:13:43.987273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.754 [2024-11-28 05:13:43.987279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:14.754 [2024-11-28 05:13:43.987292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:24:14.754 [2024-11-28 05:13:43.987298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.754 [2024-11-28 05:13:43.987338] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:14.754 [2024-11-28 05:13:43.987346] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:18.958 [2024-11-28 05:13:47.931406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:47.931746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:18.958 [2024-11-28 05:13:47.931782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3944.044 ms 00:24:18.958 [2024-11-28 05:13:47.931793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:47.946375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:47.946582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:18.958 [2024-11-28 05:13:47.946610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.454 ms 00:24:18.958 [2024-11-28 05:13:47.946620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:47.946788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:47.946800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:18.958 [2024-11-28 05:13:47.946813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:24:18.958 [2024-11-28 05:13:47.946821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:47.959842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:47.959894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:18.958 [2024-11-28 05:13:47.959908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.980 ms 00:24:18.958 [2024-11-28 05:13:47.959921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:47.959956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:47.959965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:18.958 [2024-11-28 05:13:47.959976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:18.958 [2024-11-28 05:13:47.959984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:47.960590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:47.960634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:18.958 [2024-11-28 05:13:47.960649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:24:18.958 [2024-11-28 05:13:47.960658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:47.960796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:47.960815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:18.958 [2024-11-28 05:13:47.960827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:24:18.958 [2024-11-28 05:13:47.960837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:47.969879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:47.970068] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:18.958 [2024-11-28 05:13:47.970093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.014 ms 00:24:18.958 [2024-11-28 05:13:47.970102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:47.992023] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:18.958 [2024-11-28 05:13:47.996610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:47.996667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:18.958 [2024-11-28 05:13:47.996681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.358 ms 00:24:18.958 [2024-11-28 05:13:47.996693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:48.083719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:48.083989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:18.958 [2024-11-28 05:13:48.084014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.979 ms 00:24:18.958 [2024-11-28 05:13:48.084029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:48.084273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:48.084290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:18.958 [2024-11-28 05:13:48.084300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:24:18.958 [2024-11-28 05:13:48.084311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:48.090674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:48.090868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:18.958 [2024-11-28 05:13:48.090893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.339 ms 00:24:18.958 [2024-11-28 05:13:48.090905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:48.096440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:48.096498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:18.958 [2024-11-28 05:13:48.096510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.368 ms 00:24:18.958 [2024-11-28 05:13:48.096520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:48.096862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:48.096882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:18.958 [2024-11-28 05:13:48.096891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:24:18.958 [2024-11-28 05:13:48.096904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:48.143744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:48.143810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:18.958 [2024-11-28 05:13:48.143827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.817 ms 00:24:18.958 [2024-11-28 05:13:48.143839] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:48.151147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:48.151234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:18.958 [2024-11-28 05:13:48.151246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.227 ms 00:24:18.958 [2024-11-28 05:13:48.151256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:48.157485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:48.157692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:18.958 [2024-11-28 05:13:48.157711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.178 ms 00:24:18.958 [2024-11-28 05:13:48.157722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:48.164403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:48.164594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:18.958 [2024-11-28 05:13:48.164614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.637 ms 00:24:18.958 [2024-11-28 05:13:48.164627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:48.164677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:48.164690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:18.958 [2024-11-28 05:13:48.164699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:18.958 [2024-11-28 05:13:48.164709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:48.164786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.958 [2024-11-28 05:13:48.164799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:18.958 [2024-11-28 05:13:48.164808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:18.958 [2024-11-28 05:13:48.164820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.958 [2024-11-28 05:13:48.166055] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4190.063 ms, result 0 00:24:18.958 { 00:24:18.958 "name": "ftl0", 00:24:18.958 "uuid": "8895fcb2-4145-407f-a2a5-484f995f7d99" 00:24:18.958 } 00:24:18.958 05:13:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:18.958 05:13:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:19.218 05:13:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:19.218 05:13:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:19.218 05:13:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:19.479 /dev/nbd0 00:24:19.479 05:13:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:19.479 05:13:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:19.479 05:13:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:24:19.479 05:13:48 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:19.479 05:13:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:19.479 05:13:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:19.479 05:13:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:24:19.479 05:13:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:19.479 05:13:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:19.479 05:13:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:19.479 1+0 records in 00:24:19.479 1+0 records out 00:24:19.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435806 s, 9.4 MB/s 00:24:19.479 05:13:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:19.479 05:13:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:24:19.479 05:13:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:19.479 05:13:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:19.479 05:13:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:24:19.479 05:13:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:19.479 [2024-11-28 05:13:48.731445] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:24:19.479 [2024-11-28 05:13:48.731582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91105 ] 00:24:19.739 [2024-11-28 05:13:48.879765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.739 [2024-11-28 05:13:48.908901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.120  [2024-11-28T05:13:51.398Z] Copying: 192/1024 [MB] (192 MBps) [2024-11-28T05:13:52.335Z] Copying: 381/1024 [MB] (189 MBps) [2024-11-28T05:13:53.270Z] Copying: 629/1024 [MB] (247 MBps) [2024-11-28T05:13:53.528Z] Copying: 888/1024 [MB] (259 MBps) [2024-11-28T05:13:53.786Z] Copying: 1024/1024 [MB] (average 226 MBps) 00:24:24.502 00:24:24.502 05:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:26.405 05:13:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:26.405 [2024-11-28 05:13:55.659029] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
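(For reference: the nbd export and readiness check traced above reduce to the minimal shell sketch below. Names such as $testdir are illustrative; nbd_start_disk is the RPC actually issued, and the single O_DIRECT dd read mirrors the 1-block verification in the waitfornbd trace.)

modprobe nbd
scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
# poll /proc/partitions until the kernel exposes nbd0 (the trace allows 20 tries)
for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1    # assumed pacing; the delay itself is not visible in the trace
done
# one 4 KiB O_DIRECT read proves the device services I/O before the copy starts
dd if=/dev/nbd0 of="$testdir/nbdtest" bs=4096 count=1 iflag=direct
[[ $(stat -c %s "$testdir/nbdtest") == 4096 ]] && rm -f "$testdir/nbdtest"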
00:24:26.405 [2024-11-28 05:13:55.659125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91182 ] 00:24:26.663 [2024-11-28 05:13:55.796034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.663 [2024-11-28 05:13:55.812702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.598  [2024-11-28T05:13:58.259Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-28T05:13:59.197Z] Copying: 61/1024 [MB] (31 MBps) [2024-11-28T05:14:00.135Z] Copying: 91/1024 [MB] (30 MBps) [2024-11-28T05:14:01.075Z] Copying: 123/1024 [MB] (31 MBps) [2024-11-28T05:14:02.009Z] Copying: 154/1024 [MB] (30 MBps) [2024-11-28T05:14:02.946Z] Copying: 189/1024 [MB] (34 MBps) [2024-11-28T05:14:03.885Z] Copying: 225/1024 [MB] (36 MBps) [2024-11-28T05:14:05.265Z] Copying: 255/1024 [MB] (29 MBps) [2024-11-28T05:14:06.200Z] Copying: 286/1024 [MB] (31 MBps) [2024-11-28T05:14:07.138Z] Copying: 322/1024 [MB] (35 MBps) [2024-11-28T05:14:08.074Z] Copying: 353/1024 [MB] (30 MBps) [2024-11-28T05:14:09.011Z] Copying: 385/1024 [MB] (32 MBps) [2024-11-28T05:14:09.949Z] Copying: 416/1024 [MB] (31 MBps) [2024-11-28T05:14:10.886Z] Copying: 448/1024 [MB] (32 MBps) [2024-11-28T05:14:12.263Z] Copying: 483/1024 [MB] (34 MBps) [2024-11-28T05:14:13.201Z] Copying: 516/1024 [MB] (33 MBps) [2024-11-28T05:14:14.138Z] Copying: 546/1024 [MB] (29 MBps) [2024-11-28T05:14:15.075Z] Copying: 578/1024 [MB] (32 MBps) [2024-11-28T05:14:16.013Z] Copying: 611/1024 [MB] (33 MBps) [2024-11-28T05:14:16.952Z] Copying: 643/1024 [MB] (31 MBps) [2024-11-28T05:14:17.886Z] Copying: 677/1024 [MB] (34 MBps) [2024-11-28T05:14:19.265Z] Copying: 714/1024 [MB] (36 MBps) [2024-11-28T05:14:20.203Z] Copying: 745/1024 [MB] (31 MBps) [2024-11-28T05:14:21.141Z] Copying: 780/1024 [MB] (35 MBps) [2024-11-28T05:14:22.079Z] Copying: 810/1024 [MB] (30 MBps) [2024-11-28T05:14:23.055Z] Copying: 847/1024 [MB] (36 MBps) [2024-11-28T05:14:24.009Z] Copying: 879/1024 [MB] (32 MBps) [2024-11-28T05:14:24.947Z] Copying: 908/1024 [MB] (28 MBps) [2024-11-28T05:14:25.881Z] Copying: 938/1024 [MB] (30 MBps) [2024-11-28T05:14:27.259Z] Copying: 969/1024 [MB] (30 MBps) [2024-11-28T05:14:27.517Z] Copying: 1000/1024 [MB] (30 MBps) [2024-11-28T05:14:27.776Z] Copying: 1024/1024 [MB] (average 32 MBps) 00:24:58.492 00:24:58.492 05:14:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:58.492 05:14:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:58.751 05:14:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:58.751 [2024-11-28 05:14:28.011816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.751 [2024-11-28 05:14:28.011863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:58.751 [2024-11-28 05:14:28.011877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:58.751 [2024-11-28 05:14:28.011884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.751 [2024-11-28 05:14:28.011905] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:58.751 [2024-11-28 05:14:28.012438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:58.751 [2024-11-28 05:14:28.012466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:58.751 [2024-11-28 05:14:28.012473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:24:58.751 [2024-11-28 05:14:28.012481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.751 [2024-11-28 05:14:28.014388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.751 [2024-11-28 05:14:28.014419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:58.751 [2024-11-28 05:14:28.014428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.889 ms 00:24:58.751 [2024-11-28 05:14:28.014436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.751 [2024-11-28 05:14:28.029359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.751 [2024-11-28 05:14:28.029390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:58.751 [2024-11-28 05:14:28.029401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.908 ms 00:24:58.751 [2024-11-28 05:14:28.029409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.751 [2024-11-28 05:14:28.034053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.751 [2024-11-28 05:14:28.034078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:58.751 [2024-11-28 05:14:28.034087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.618 ms 00:24:58.751 [2024-11-28 05:14:28.034095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.013 [2024-11-28 05:14:28.035624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.013 [2024-11-28 05:14:28.035767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:59.013 [2024-11-28 05:14:28.035780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.472 ms 00:24:59.013 [2024-11-28 05:14:28.035789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.013 [2024-11-28 05:14:28.040914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.013 [2024-11-28 05:14:28.040948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:59.013 [2024-11-28 05:14:28.040956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.096 ms 00:24:59.013 [2024-11-28 05:14:28.040964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.013 [2024-11-28 05:14:28.041060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.013 [2024-11-28 05:14:28.041070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:59.013 [2024-11-28 05:14:28.041077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:59.013 [2024-11-28 05:14:28.041091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.013 [2024-11-28 05:14:28.043824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.013 [2024-11-28 05:14:28.043855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:59.013 [2024-11-28 05:14:28.043861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms 00:24:59.013 [2024-11-28 05:14:28.043868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.013 [2024-11-28 
05:14:28.045818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.013 [2024-11-28 05:14:28.045849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:59.013 [2024-11-28 05:14:28.045856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.925 ms 00:24:59.013 [2024-11-28 05:14:28.045863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.013 [2024-11-28 05:14:28.047465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.013 [2024-11-28 05:14:28.047493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:59.013 [2024-11-28 05:14:28.047500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.576 ms 00:24:59.013 [2024-11-28 05:14:28.047508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.013 [2024-11-28 05:14:28.049112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.013 [2024-11-28 05:14:28.049141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:59.013 [2024-11-28 05:14:28.049148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.559 ms 00:24:59.013 [2024-11-28 05:14:28.049155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.013 [2024-11-28 05:14:28.049191] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:59.013 [2024-11-28 05:14:28.049210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 
261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:59.013 [2024-11-28 05:14:28.049587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049671] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 
05:14:28.049851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:59.014 [2024-11-28 05:14:28.049934] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:59.014 [2024-11-28 05:14:28.049940] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8895fcb2-4145-407f-a2a5-484f995f7d99 00:24:59.014 [2024-11-28 05:14:28.049948] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:59.014 [2024-11-28 05:14:28.049954] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:59.014 [2024-11-28 05:14:28.049962] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:59.014 [2024-11-28 05:14:28.049968] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:59.014 [2024-11-28 05:14:28.049976] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:59.014 [2024-11-28 05:14:28.049982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:59.014 [2024-11-28 05:14:28.049989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:59.014 [2024-11-28 05:14:28.049994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:59.014 [2024-11-28 05:14:28.050006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:59.014 [2024-11-28 05:14:28.050012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.014 [2024-11-28 05:14:28.050023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:59.014 [2024-11-28 05:14:28.050031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.822 ms 00:24:59.014 [2024-11-28 05:14:28.050039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.014 [2024-11-28 05:14:28.051799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.014 [2024-11-28 05:14:28.051824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:59.014 [2024-11-28 05:14:28.051832] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.745 ms 00:24:59.014 [2024-11-28 05:14:28.051841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.014 [2024-11-28 05:14:28.051927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.014 [2024-11-28 05:14:28.051938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:59.014 [2024-11-28 05:14:28.051945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:59.014 [2024-11-28 05:14:28.051953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.014 [2024-11-28 05:14:28.058001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.014 [2024-11-28 05:14:28.058129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:59.014 [2024-11-28 05:14:28.058142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.014 [2024-11-28 05:14:28.058150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.014 [2024-11-28 05:14:28.058208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.014 [2024-11-28 05:14:28.058220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:59.014 [2024-11-28 05:14:28.058230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.014 [2024-11-28 05:14:28.058237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.014 [2024-11-28 05:14:28.058296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.014 [2024-11-28 05:14:28.058309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:59.014 [2024-11-28 05:14:28.058315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.014 [2024-11-28 05:14:28.058325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.014 [2024-11-28 05:14:28.058339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.014 [2024-11-28 05:14:28.058347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:59.014 [2024-11-28 05:14:28.058355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.014 [2024-11-28 05:14:28.058362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.014 [2024-11-28 05:14:28.069452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.014 [2024-11-28 05:14:28.069491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:59.015 [2024-11-28 05:14:28.069500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.015 [2024-11-28 05:14:28.069509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.015 [2024-11-28 05:14:28.078492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.015 [2024-11-28 05:14:28.078530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:59.015 [2024-11-28 05:14:28.078538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.015 [2024-11-28 05:14:28.078546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.015 [2024-11-28 05:14:28.078610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.015 [2024-11-28 05:14:28.078623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize core IO channel 00:24:59.015 [2024-11-28 05:14:28.078629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.015 [2024-11-28 05:14:28.078638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.015 [2024-11-28 05:14:28.078705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.015 [2024-11-28 05:14:28.078717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:59.015 [2024-11-28 05:14:28.078724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.015 [2024-11-28 05:14:28.078734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.015 [2024-11-28 05:14:28.078792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.015 [2024-11-28 05:14:28.078801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:59.015 [2024-11-28 05:14:28.078807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.015 [2024-11-28 05:14:28.078815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.015 [2024-11-28 05:14:28.078841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.015 [2024-11-28 05:14:28.078850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:59.015 [2024-11-28 05:14:28.078856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.015 [2024-11-28 05:14:28.078864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.015 [2024-11-28 05:14:28.078900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.015 [2024-11-28 05:14:28.078912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:59.015 [2024-11-28 05:14:28.078918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.015 [2024-11-28 05:14:28.078926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.015 [2024-11-28 05:14:28.078965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:59.015 [2024-11-28 05:14:28.078975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:59.015 [2024-11-28 05:14:28.078981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:59.015 [2024-11-28 05:14:28.078990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.015 [2024-11-28 05:14:28.079111] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.257 ms, result 0 00:24:59.015 true 00:24:59.015 05:14:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 90958 00:24:59.015 05:14:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid90958 00:24:59.015 05:14:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:59.015 [2024-11-28 05:14:28.153006] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
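(Condensed from the script steps traced above, the dirty-shutdown sequence is roughly the sketch below; paths are shortened and $tgt_pid stands in for the traced pid 90958.)

sync /dev/nbd0
scripts/rpc.py nbd_stop_disk /dev/nbd0
scripts/rpc.py bdev_ftl_unload -b ftl0    # traced above as 'FTL shutdown', result 0
kill -9 "$tgt_pid"                        # the dirty shutdown itself
rm -f "/dev/shm/spdk_tgt_trace.pid$tgt_pid"
# regenerate test data, then write it straight through the ftl0 bdev; spdk_dd
# loads the bdev config saved earlier (ftl.json), forcing FTL recovery on startup
build/bin/spdk_dd --if=/dev/urandom --of=test/ftl/testfile2 --bs=4096 --count=262144
build/bin/spdk_dd --if=test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
    --json=test/ftl/config/ftl.json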
00:24:59.015 [2024-11-28 05:14:28.153098] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91526 ] 00:24:59.015 [2024-11-28 05:14:28.291568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.274 [2024-11-28 05:14:28.317644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.211  [2024-11-28T05:14:30.433Z] Copying: 256/1024 [MB] (256 MBps) [2024-11-28T05:14:31.812Z] Copying: 513/1024 [MB] (256 MBps) [2024-11-28T05:14:32.750Z] Copying: 766/1024 [MB] (253 MBps) [2024-11-28T05:14:32.750Z] Copying: 1015/1024 [MB] (248 MBps) [2024-11-28T05:14:32.750Z] Copying: 1024/1024 [MB] (average 253 MBps) 00:25:03.466 00:25:03.466 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 90958 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:03.466 05:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:03.466 [2024-11-28 05:14:32.641102] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:25:03.466 [2024-11-28 05:14:32.641215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91578 ] 00:25:03.727 [2024-11-28 05:14:32.773628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.727 [2024-11-28 05:14:32.798362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.727 [2024-11-28 05:14:32.899254] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:03.727 [2024-11-28 05:14:32.899313] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:03.727 [2024-11-28 05:14:32.961879] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:03.727 [2024-11-28 05:14:32.962533] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:03.727 [2024-11-28 05:14:32.963244] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:04.297 [2024-11-28 05:14:33.380271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.297 [2024-11-28 05:14:33.380308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:04.297 [2024-11-28 05:14:33.380319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:04.297 [2024-11-28 05:14:33.380330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.297 [2024-11-28 05:14:33.380371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.297 [2024-11-28 05:14:33.380379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:04.297 [2024-11-28 05:14:33.380385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:04.297 [2024-11-28 05:14:33.380394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.297 [2024-11-28 05:14:33.380410] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:04.297 
[2024-11-28 05:14:33.380603] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:04.297 [2024-11-28 05:14:33.380634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.297 [2024-11-28 05:14:33.380643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:04.297 [2024-11-28 05:14:33.380650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:25:04.297 [2024-11-28 05:14:33.380656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.297 [2024-11-28 05:14:33.381910] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:04.297 [2024-11-28 05:14:33.384703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.297 [2024-11-28 05:14:33.384731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:04.297 [2024-11-28 05:14:33.384739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.795 ms 00:25:04.297 [2024-11-28 05:14:33.384745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.297 [2024-11-28 05:14:33.384790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.297 [2024-11-28 05:14:33.384802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:04.297 [2024-11-28 05:14:33.384809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:04.297 [2024-11-28 05:14:33.384814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.297 [2024-11-28 05:14:33.391011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.297 [2024-11-28 05:14:33.391036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:04.297 [2024-11-28 05:14:33.391044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.156 ms 00:25:04.297 [2024-11-28 05:14:33.391049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.297 [2024-11-28 05:14:33.391118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.297 [2024-11-28 05:14:33.391125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:04.297 [2024-11-28 05:14:33.391132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:04.297 [2024-11-28 05:14:33.391140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.297 [2024-11-28 05:14:33.391188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.297 [2024-11-28 05:14:33.391196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:04.297 [2024-11-28 05:14:33.391203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:04.297 [2024-11-28 05:14:33.391209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.297 [2024-11-28 05:14:33.391230] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:04.297 [2024-11-28 05:14:33.392764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.297 [2024-11-28 05:14:33.392787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:04.297 [2024-11-28 05:14:33.392794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.542 ms 00:25:04.297 [2024-11-28 05:14:33.392803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:04.297 [2024-11-28 05:14:33.392830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.297 [2024-11-28 05:14:33.392836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:04.297 [2024-11-28 05:14:33.392842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:04.297 [2024-11-28 05:14:33.392851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.297 [2024-11-28 05:14:33.392866] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:04.297 [2024-11-28 05:14:33.392882] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:04.298 [2024-11-28 05:14:33.392916] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:04.298 [2024-11-28 05:14:33.392931] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:04.298 [2024-11-28 05:14:33.393015] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:04.298 [2024-11-28 05:14:33.393024] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:04.298 [2024-11-28 05:14:33.393038] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:04.298 [2024-11-28 05:14:33.393049] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:04.298 [2024-11-28 05:14:33.393056] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:04.298 [2024-11-28 05:14:33.393062] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:04.298 [2024-11-28 05:14:33.393067] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:04.298 [2024-11-28 05:14:33.393073] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:04.298 [2024-11-28 05:14:33.393081] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:04.298 [2024-11-28 05:14:33.393088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.298 [2024-11-28 05:14:33.393094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:04.298 [2024-11-28 05:14:33.393100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:25:04.298 [2024-11-28 05:14:33.393105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.298 [2024-11-28 05:14:33.393168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.298 [2024-11-28 05:14:33.393187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:04.298 [2024-11-28 05:14:33.393194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:04.298 [2024-11-28 05:14:33.393199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.298 [2024-11-28 05:14:33.393279] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:04.298 [2024-11-28 05:14:33.393290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:04.298 [2024-11-28 05:14:33.393297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:04.298 [2024-11-28 05:14:33.393304] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.298 [2024-11-28 05:14:33.393309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:04.298 [2024-11-28 05:14:33.393315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:04.298 [2024-11-28 05:14:33.393320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:04.298 [2024-11-28 05:14:33.393326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:04.298 [2024-11-28 05:14:33.393331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:04.298 [2024-11-28 05:14:33.393336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:04.298 [2024-11-28 05:14:33.393341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:04.298 [2024-11-28 05:14:33.393348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:04.298 [2024-11-28 05:14:33.393354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:04.298 [2024-11-28 05:14:33.393360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:04.298 [2024-11-28 05:14:33.393365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:04.298 [2024-11-28 05:14:33.393370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.298 [2024-11-28 05:14:33.393379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:04.298 [2024-11-28 05:14:33.393385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:04.298 [2024-11-28 05:14:33.393390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.298 [2024-11-28 05:14:33.393396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:04.298 [2024-11-28 05:14:33.393400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:04.298 [2024-11-28 05:14:33.393405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:04.298 [2024-11-28 05:14:33.393411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:04.298 [2024-11-28 05:14:33.393417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:04.298 [2024-11-28 05:14:33.393423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:04.298 [2024-11-28 05:14:33.393429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:04.298 [2024-11-28 05:14:33.393435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:04.298 [2024-11-28 05:14:33.393441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:04.298 [2024-11-28 05:14:33.393448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:04.298 [2024-11-28 05:14:33.393454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:04.298 [2024-11-28 05:14:33.393460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:04.298 [2024-11-28 05:14:33.393466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:04.298 [2024-11-28 05:14:33.393474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:04.298 [2024-11-28 05:14:33.393480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:04.298 [2024-11-28 05:14:33.393486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:04.298 
[2024-11-28 05:14:33.393492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:04.298 [2024-11-28 05:14:33.393497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:04.298 [2024-11-28 05:14:33.393503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:04.298 [2024-11-28 05:14:33.393510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:04.298 [2024-11-28 05:14:33.393516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.298 [2024-11-28 05:14:33.393521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:04.298 [2024-11-28 05:14:33.393527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:04.298 [2024-11-28 05:14:33.393533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.298 [2024-11-28 05:14:33.393539] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:04.298 [2024-11-28 05:14:33.393548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:04.298 [2024-11-28 05:14:33.393556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:04.298 [2024-11-28 05:14:33.393563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:04.298 [2024-11-28 05:14:33.393570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:04.298 [2024-11-28 05:14:33.393578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:04.298 [2024-11-28 05:14:33.393584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:04.298 [2024-11-28 05:14:33.393590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:04.298 [2024-11-28 05:14:33.393596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:04.298 [2024-11-28 05:14:33.393602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:04.298 [2024-11-28 05:14:33.393609] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:04.298 [2024-11-28 05:14:33.393618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:04.298 [2024-11-28 05:14:33.393625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:04.298 [2024-11-28 05:14:33.393633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:04.298 [2024-11-28 05:14:33.393639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:04.298 [2024-11-28 05:14:33.393645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:04.298 [2024-11-28 05:14:33.393652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:04.298 [2024-11-28 05:14:33.393663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:04.298 [2024-11-28 05:14:33.393670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:25:04.298 [2024-11-28 05:14:33.393685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:04.298 [2024-11-28 05:14:33.393692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:04.298 [2024-11-28 05:14:33.393700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:04.298 [2024-11-28 05:14:33.393707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:04.298 [2024-11-28 05:14:33.393713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:04.298 [2024-11-28 05:14:33.393719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:04.298 [2024-11-28 05:14:33.393726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:04.298 [2024-11-28 05:14:33.393732] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:04.298 [2024-11-28 05:14:33.393742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:04.298 [2024-11-28 05:14:33.393749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:04.298 [2024-11-28 05:14:33.393755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:04.298 [2024-11-28 05:14:33.393761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:04.298 [2024-11-28 05:14:33.393767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:04.298 [2024-11-28 05:14:33.393773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.298 [2024-11-28 05:14:33.393784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:04.298 [2024-11-28 05:14:33.393790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:25:04.299 [2024-11-28 05:14:33.393796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.404924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.404956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:04.299 [2024-11-28 05:14:33.404964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.088 ms 00:25:04.299 [2024-11-28 05:14:33.404971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.405035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.405044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:04.299 [2024-11-28 05:14:33.405053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:04.299 [2024-11-28 
05:14:33.405060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.426802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.426852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:04.299 [2024-11-28 05:14:33.426875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.702 ms 00:25:04.299 [2024-11-28 05:14:33.426887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.426944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.426957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:04.299 [2024-11-28 05:14:33.426969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:04.299 [2024-11-28 05:14:33.426985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.427507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.427539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:04.299 [2024-11-28 05:14:33.427553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:25:04.299 [2024-11-28 05:14:33.427572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.427758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.427783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:04.299 [2024-11-28 05:14:33.427801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:25:04.299 [2024-11-28 05:14:33.427812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.435154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.435219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:04.299 [2024-11-28 05:14:33.435233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.316 ms 00:25:04.299 [2024-11-28 05:14:33.435244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.438309] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:04.299 [2024-11-28 05:14:33.438338] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:04.299 [2024-11-28 05:14:33.438347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.438354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:04.299 [2024-11-28 05:14:33.438360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.995 ms 00:25:04.299 [2024-11-28 05:14:33.438371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.449880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.449908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:04.299 [2024-11-28 05:14:33.449917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.477 ms 00:25:04.299 [2024-11-28 05:14:33.449924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:25:04.299 [2024-11-28 05:14:33.451687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.451711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:04.299 [2024-11-28 05:14:33.451719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.734 ms 00:25:04.299 [2024-11-28 05:14:33.451725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.452888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.452914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:04.299 [2024-11-28 05:14:33.452922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.138 ms 00:25:04.299 [2024-11-28 05:14:33.452927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.453190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.453205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:04.299 [2024-11-28 05:14:33.453216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:25:04.299 [2024-11-28 05:14:33.453224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.471293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.471324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:04.299 [2024-11-28 05:14:33.471333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.055 ms 00:25:04.299 [2024-11-28 05:14:33.471340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.477203] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:04.299 [2024-11-28 05:14:33.479477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.479504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:04.299 [2024-11-28 05:14:33.479520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.105 ms 00:25:04.299 [2024-11-28 05:14:33.479527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.479567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.479576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:04.299 [2024-11-28 05:14:33.479587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:04.299 [2024-11-28 05:14:33.479596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.479675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.479685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:04.299 [2024-11-28 05:14:33.479691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:04.299 [2024-11-28 05:14:33.479698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.479715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.479726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:04.299 
[2024-11-28 05:14:33.479733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:04.299 [2024-11-28 05:14:33.479741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.479769] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:04.299 [2024-11-28 05:14:33.479777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.479783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:04.299 [2024-11-28 05:14:33.479789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:04.299 [2024-11-28 05:14:33.479795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.483442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.483468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:04.299 [2024-11-28 05:14:33.483477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.632 ms 00:25:04.299 [2024-11-28 05:14:33.483483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.483545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.299 [2024-11-28 05:14:33.483553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:04.299 [2024-11-28 05:14:33.483559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:04.299 [2024-11-28 05:14:33.483565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.299 [2024-11-28 05:14:33.484734] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 104.084 ms, result 0 00:25:05.240  [2024-11-28T05:14:35.905Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-28T05:15:55.488Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-28 05:15:55.445776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.204 [2024-11-28 05:15:55.445850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:26.204 [2024-11-28 05:15:55.445867] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:26.204 [2024-11-28 05:15:55.445876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.204 [2024-11-28 05:15:55.447898] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:26.204 [2024-11-28 05:15:55.451725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.204 [2024-11-28 05:15:55.451761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:26.204 [2024-11-28 05:15:55.451771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.789 ms 00:26:26.204 [2024-11-28 05:15:55.451780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.204 [2024-11-28 05:15:55.463742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.204 [2024-11-28 05:15:55.463776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:26.204 [2024-11-28 05:15:55.463787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.094 ms 00:26:26.204 [2024-11-28 05:15:55.463795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.204 [2024-11-28 05:15:55.484916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.205 [2024-11-28 05:15:55.484958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:26.205 [2024-11-28 05:15:55.484969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.100 ms 00:26:26.205 [2024-11-28 05:15:55.484977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.465 [2024-11-28 05:15:55.491128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.465 [2024-11-28 05:15:55.491163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:26.465 [2024-11-28 05:15:55.491174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.124 ms 00:26:26.465 [2024-11-28 05:15:55.491191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.465 [2024-11-28 05:15:55.493558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.465 [2024-11-28 05:15:55.493589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:26.465 [2024-11-28 05:15:55.493598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.297 ms 00:26:26.465 [2024-11-28 05:15:55.493606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.465 [2024-11-28 05:15:55.497818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.465 [2024-11-28 05:15:55.497848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:26.465 [2024-11-28 05:15:55.497865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.171 ms 00:26:26.465 [2024-11-28 05:15:55.497872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.465 [2024-11-28 05:15:55.747226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.465 [2024-11-28 05:15:55.747285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:26.465 [2024-11-28 05:15:55.747312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 249.313 ms 00:26:26.465 [2024-11-28 05:15:55.747323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.728 [2024-11-28 05:15:55.750719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:26.728 [2024-11-28 05:15:55.750764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:26.728 [2024-11-28 05:15:55.750776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.377 ms 00:26:26.728 [2024-11-28 05:15:55.750784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.728 [2024-11-28 05:15:55.753814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.728 [2024-11-28 05:15:55.753858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:26.728 [2024-11-28 05:15:55.753870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.984 ms 00:26:26.728 [2024-11-28 05:15:55.753878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.728 [2024-11-28 05:15:55.756242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.728 [2024-11-28 05:15:55.756284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:26.728 [2024-11-28 05:15:55.756294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.319 ms 00:26:26.728 [2024-11-28 05:15:55.756301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.728 [2024-11-28 05:15:55.758824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.728 [2024-11-28 05:15:55.758867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:26.728 [2024-11-28 05:15:55.758876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.449 ms 00:26:26.728 [2024-11-28 05:15:55.758885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.728 [2024-11-28 05:15:55.758924] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:26.728 [2024-11-28 05:15:55.758952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 89344 / 261120 wr_cnt: 1 state: open 00:26:26.728 [2024-11-28 05:15:55.758969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.758978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.758987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.758995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 
00:26:26.728 [2024-11-28 05:15:55.759063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:26.728 [2024-11-28 05:15:55.759203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 
wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759676] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:26.729 [2024-11-28 05:15:55.759804] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:26.729 [2024-11-28 05:15:55.759828] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8895fcb2-4145-407f-a2a5-484f995f7d99 00:26:26.729 [2024-11-28 05:15:55.759836] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 89344 00:26:26.729 [2024-11-28 05:15:55.759844] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 90304 00:26:26.729 [2024-11-28 05:15:55.759852] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 89344 00:26:26.729 [2024-11-28 05:15:55.759861] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0107 00:26:26.729 [2024-11-28 05:15:55.759869] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:26.729 [2024-11-28 05:15:55.759879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:26.729 [2024-11-28 05:15:55.759887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:26.729 [2024-11-28 05:15:55.759894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:26.729 [2024-11-28 05:15:55.759900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:26.729 [2024-11-28 05:15:55.759907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.729 [2024-11-28 05:15:55.759915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:26.729 [2024-11-28 05:15:55.759927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:26:26.729 
[2024-11-28 05:15:55.759934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.729 [2024-11-28 05:15:55.763123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.729 [2024-11-28 05:15:55.763163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:26.729 [2024-11-28 05:15:55.763194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.170 ms 00:26:26.729 [2024-11-28 05:15:55.763205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.730 [2024-11-28 05:15:55.763368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.730 [2024-11-28 05:15:55.763388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:26.730 [2024-11-28 05:15:55.763400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:26:26.730 [2024-11-28 05:15:55.763415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.730 [2024-11-28 05:15:55.773753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.730 [2024-11-28 05:15:55.773798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:26.730 [2024-11-28 05:15:55.773818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.730 [2024-11-28 05:15:55.773827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.730 [2024-11-28 05:15:55.773894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.730 [2024-11-28 05:15:55.773905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:26.730 [2024-11-28 05:15:55.773915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.730 [2024-11-28 05:15:55.773929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.730 [2024-11-28 05:15:55.774014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.730 [2024-11-28 05:15:55.774029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:26.730 [2024-11-28 05:15:55.774038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.730 [2024-11-28 05:15:55.774048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.730 [2024-11-28 05:15:55.774065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.730 [2024-11-28 05:15:55.774077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:26.730 [2024-11-28 05:15:55.774085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.730 [2024-11-28 05:15:55.774093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.730 [2024-11-28 05:15:55.793801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.730 [2024-11-28 05:15:55.793854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:26.730 [2024-11-28 05:15:55.793869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.730 [2024-11-28 05:15:55.793879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.730 [2024-11-28 05:15:55.808980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.730 [2024-11-28 05:15:55.809044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:26.730 [2024-11-28 05:15:55.809059] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.730 [2024-11-28 05:15:55.809069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.730 [2024-11-28 05:15:55.809168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.730 [2024-11-28 05:15:55.809242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:26.730 [2024-11-28 05:15:55.809253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.730 [2024-11-28 05:15:55.809262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.730 [2024-11-28 05:15:55.809307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.730 [2024-11-28 05:15:55.809328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:26.730 [2024-11-28 05:15:55.809342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.730 [2024-11-28 05:15:55.809350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.730 [2024-11-28 05:15:55.809435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.730 [2024-11-28 05:15:55.809449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:26.730 [2024-11-28 05:15:55.809459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.730 [2024-11-28 05:15:55.809471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.730 [2024-11-28 05:15:55.809508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.730 [2024-11-28 05:15:55.809522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:26.730 [2024-11-28 05:15:55.809531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.730 [2024-11-28 05:15:55.809546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.730 [2024-11-28 05:15:55.809601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.730 [2024-11-28 05:15:55.809614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:26.730 [2024-11-28 05:15:55.809640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.730 [2024-11-28 05:15:55.809659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.730 [2024-11-28 05:15:55.809721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:26.730 [2024-11-28 05:15:55.809736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:26.730 [2024-11-28 05:15:55.809752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:26.730 [2024-11-28 05:15:55.809761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.730 [2024-11-28 05:15:55.809935] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 365.034 ms, result 0 00:26:27.303 00:26:27.303 00:26:27.303 05:15:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:29.286 05:15:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:29.547 [2024-11-28 05:15:58.614960] Starting SPDK v25.01-pre git sha1 
35cd3e84d / DPDK 22.11.4 initialization... 00:26:29.547 [2024-11-28 05:15:58.615086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92452 ] 00:26:29.547 [2024-11-28 05:15:58.753133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.547 [2024-11-28 05:15:58.788122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.808 [2024-11-28 05:15:58.934817] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:29.809 [2024-11-28 05:15:58.934912] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:30.072 [2024-11-28 05:15:59.098800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.072 [2024-11-28 05:15:59.098866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:30.072 [2024-11-28 05:15:59.098883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:30.072 [2024-11-28 05:15:59.098892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.072 [2024-11-28 05:15:59.098962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.072 [2024-11-28 05:15:59.098974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:30.072 [2024-11-28 05:15:59.098988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:30.072 [2024-11-28 05:15:59.099004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.072 [2024-11-28 05:15:59.099039] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:30.072 [2024-11-28 05:15:59.099347] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:30.072 [2024-11-28 05:15:59.099371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.072 [2024-11-28 05:15:59.099380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:30.072 [2024-11-28 05:15:59.099393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:26:30.072 [2024-11-28 05:15:59.099402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.072 [2024-11-28 05:15:59.101722] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:30.072 [2024-11-28 05:15:59.106388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.072 [2024-11-28 05:15:59.106441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:30.072 [2024-11-28 05:15:59.106454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.668 ms 00:26:30.072 [2024-11-28 05:15:59.106482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.072 [2024-11-28 05:15:59.106562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.072 [2024-11-28 05:15:59.106578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:30.072 [2024-11-28 05:15:59.106588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:30.072 [2024-11-28 05:15:59.106597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.072 [2024-11-28 05:15:59.118019] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.072 [2024-11-28 05:15:59.118063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:30.072 [2024-11-28 05:15:59.118079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.374 ms 00:26:30.072 [2024-11-28 05:15:59.118089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.072 [2024-11-28 05:15:59.118227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.072 [2024-11-28 05:15:59.118243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:30.072 [2024-11-28 05:15:59.118253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:26:30.072 [2024-11-28 05:15:59.118262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.072 [2024-11-28 05:15:59.118324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.072 [2024-11-28 05:15:59.118342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:30.072 [2024-11-28 05:15:59.118352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:30.072 [2024-11-28 05:15:59.118365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.073 [2024-11-28 05:15:59.118389] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:30.073 [2024-11-28 05:15:59.121040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.073 [2024-11-28 05:15:59.121083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:30.073 [2024-11-28 05:15:59.121095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.657 ms 00:26:30.073 [2024-11-28 05:15:59.121104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.073 [2024-11-28 05:15:59.121142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.073 [2024-11-28 05:15:59.121159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:30.073 [2024-11-28 05:15:59.121168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:30.073 [2024-11-28 05:15:59.121197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.073 [2024-11-28 05:15:59.121222] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:30.073 [2024-11-28 05:15:59.121256] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:30.073 [2024-11-28 05:15:59.121309] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:30.073 [2024-11-28 05:15:59.121333] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:30.073 [2024-11-28 05:15:59.121445] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:30.073 [2024-11-28 05:15:59.121459] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:30.073 [2024-11-28 05:15:59.121474] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:30.073 [2024-11-28 05:15:59.121486] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:30.073 
[2024-11-28 05:15:59.121496] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:30.073 [2024-11-28 05:15:59.121505] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:30.073 [2024-11-28 05:15:59.121516] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:30.073 [2024-11-28 05:15:59.121525] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:30.073 [2024-11-28 05:15:59.121534] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:30.073 [2024-11-28 05:15:59.121546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.073 [2024-11-28 05:15:59.121555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:30.073 [2024-11-28 05:15:59.121566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:26:30.073 [2024-11-28 05:15:59.121575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.073 [2024-11-28 05:15:59.121680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.073 [2024-11-28 05:15:59.121692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:30.073 [2024-11-28 05:15:59.121702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:26:30.073 [2024-11-28 05:15:59.121712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.073 [2024-11-28 05:15:59.121820] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:30.073 [2024-11-28 05:15:59.121840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:30.073 [2024-11-28 05:15:59.121850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:30.073 [2024-11-28 05:15:59.121864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.073 [2024-11-28 05:15:59.121876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:30.073 [2024-11-28 05:15:59.121886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:30.073 [2024-11-28 05:15:59.121894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:30.073 [2024-11-28 05:15:59.121902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:30.073 [2024-11-28 05:15:59.121914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:30.073 [2024-11-28 05:15:59.121924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:30.073 [2024-11-28 05:15:59.121933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:30.073 [2024-11-28 05:15:59.121941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:30.073 [2024-11-28 05:15:59.121949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:30.073 [2024-11-28 05:15:59.121957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:30.073 [2024-11-28 05:15:59.121966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:30.073 [2024-11-28 05:15:59.121980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.073 [2024-11-28 05:15:59.121988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:30.073 [2024-11-28 05:15:59.121996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:30.073 [2024-11-28 
05:15:59.122004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.073 [2024-11-28 05:15:59.122012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:30.073 [2024-11-28 05:15:59.122020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:30.073 [2024-11-28 05:15:59.122030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:30.073 [2024-11-28 05:15:59.122038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:30.073 [2024-11-28 05:15:59.122047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:30.073 [2024-11-28 05:15:59.122059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:30.073 [2024-11-28 05:15:59.122067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:30.073 [2024-11-28 05:15:59.122078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:30.073 [2024-11-28 05:15:59.122087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:30.073 [2024-11-28 05:15:59.122094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:30.073 [2024-11-28 05:15:59.122101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:30.073 [2024-11-28 05:15:59.122108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:30.073 [2024-11-28 05:15:59.122114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:30.073 [2024-11-28 05:15:59.122121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:30.073 [2024-11-28 05:15:59.122127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:30.073 [2024-11-28 05:15:59.122136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:30.073 [2024-11-28 05:15:59.122142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:30.073 [2024-11-28 05:15:59.122149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:30.073 [2024-11-28 05:15:59.122156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:30.073 [2024-11-28 05:15:59.122163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:30.073 [2024-11-28 05:15:59.122169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.073 [2024-11-28 05:15:59.122194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:30.073 [2024-11-28 05:15:59.122203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:30.073 [2024-11-28 05:15:59.122209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.073 [2024-11-28 05:15:59.122215] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:30.073 [2024-11-28 05:15:59.122227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:30.073 [2024-11-28 05:15:59.122234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:30.073 [2024-11-28 05:15:59.122241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.073 [2024-11-28 05:15:59.122253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:30.073 [2024-11-28 05:15:59.122264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:30.073 [2024-11-28 05:15:59.122270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 3.38 MiB 00:26:30.073 [2024-11-28 05:15:59.122277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:30.073 [2024-11-28 05:15:59.122284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:30.073 [2024-11-28 05:15:59.122292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:30.073 [2024-11-28 05:15:59.122301] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:30.073 [2024-11-28 05:15:59.122318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:30.073 [2024-11-28 05:15:59.122331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:30.073 [2024-11-28 05:15:59.122343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:30.073 [2024-11-28 05:15:59.122351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:30.073 [2024-11-28 05:15:59.122358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:30.073 [2024-11-28 05:15:59.122367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:30.073 [2024-11-28 05:15:59.122376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:30.073 [2024-11-28 05:15:59.122384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:30.073 [2024-11-28 05:15:59.122392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:30.073 [2024-11-28 05:15:59.122401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:30.073 [2024-11-28 05:15:59.122417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:30.073 [2024-11-28 05:15:59.122426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:30.073 [2024-11-28 05:15:59.122434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:30.073 [2024-11-28 05:15:59.122441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:30.073 [2024-11-28 05:15:59.122449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:30.074 [2024-11-28 05:15:59.122456] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:30.074 [2024-11-28 05:15:59.122465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:30.074 [2024-11-28 05:15:59.122475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:30.074 [2024-11-28 05:15:59.122486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:30.074 [2024-11-28 05:15:59.122493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:30.074 [2024-11-28 05:15:59.122500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:30.074 [2024-11-28 05:15:59.122507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.122515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:30.074 [2024-11-28 05:15:59.122523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:26:30.074 [2024-11-28 05:15:59.122535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.142498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.142546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:30.074 [2024-11-28 05:15:59.142558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.898 ms 00:26:30.074 [2024-11-28 05:15:59.142567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.142663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.142672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:30.074 [2024-11-28 05:15:59.142681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:30.074 [2024-11-28 05:15:59.142690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.168601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.168675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:30.074 [2024-11-28 05:15:59.168696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.848 ms 00:26:30.074 [2024-11-28 05:15:59.168710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.168781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.168810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:30.074 [2024-11-28 05:15:59.168825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:30.074 [2024-11-28 05:15:59.168839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.169691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.169742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:30.074 [2024-11-28 05:15:59.169760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:26:30.074 [2024-11-28 05:15:59.169773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.170005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.170023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:26:30.074 [2024-11-28 05:15:59.170036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:26:30.074 [2024-11-28 05:15:59.170049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.181361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.181406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:30.074 [2024-11-28 05:15:59.181418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.281 ms 00:26:30.074 [2024-11-28 05:15:59.181436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.186356] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:30.074 [2024-11-28 05:15:59.186410] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:30.074 [2024-11-28 05:15:59.186430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.186440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:30.074 [2024-11-28 05:15:59.186450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.868 ms 00:26:30.074 [2024-11-28 05:15:59.186458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.203024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.203077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:30.074 [2024-11-28 05:15:59.203089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.508 ms 00:26:30.074 [2024-11-28 05:15:59.203099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.206077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.206130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:30.074 [2024-11-28 05:15:59.206141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.920 ms 00:26:30.074 [2024-11-28 05:15:59.206150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.208660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.208723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:30.074 [2024-11-28 05:15:59.208735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.442 ms 00:26:30.074 [2024-11-28 05:15:59.208743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.209278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.209328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:30.074 [2024-11-28 05:15:59.209342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:26:30.074 [2024-11-28 05:15:59.209360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.241512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.241567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:30.074 [2024-11-28 05:15:59.241581] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.125 ms 00:26:30.074 [2024-11-28 05:15:59.241590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.250389] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:30.074 [2024-11-28 05:15:59.254020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.254064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:30.074 [2024-11-28 05:15:59.254078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.366 ms 00:26:30.074 [2024-11-28 05:15:59.254087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.254172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.254206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:30.074 [2024-11-28 05:15:59.254218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:30.074 [2024-11-28 05:15:59.254240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.256270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.256322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:30.074 [2024-11-28 05:15:59.256334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.987 ms 00:26:30.074 [2024-11-28 05:15:59.256343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.256382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.256399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:30.074 [2024-11-28 05:15:59.256409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:30.074 [2024-11-28 05:15:59.256418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.256467] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:30.074 [2024-11-28 05:15:59.256479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.256493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:30.074 [2024-11-28 05:15:59.256507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:30.074 [2024-11-28 05:15:59.256523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.263025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.263078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:30.074 [2024-11-28 05:15:59.263090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.474 ms 00:26:30.074 [2024-11-28 05:15:59.263100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.263217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.074 [2024-11-28 05:15:59.263232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:30.074 [2024-11-28 05:15:59.263242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:30.074 [2024-11-28 05:15:59.263256] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:30.074 [2024-11-28 05:15:59.264690] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 165.309 ms, result 0 00:26:31.460 [2024-11-28T05:16:49.326Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-28 05:16:49.249909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.042 [2024-11-28 05:16:49.249994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:20.042 [2024-11-28 05:16:49.250012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.004 ms 00:27:20.042 [2024-11-28 05:16:49.250021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.042 [2024-11-28 05:16:49.250046] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:20.042 [2024-11-28 05:16:49.250904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.042 [2024-11-28 05:16:49.250935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:20.042 [2024-11-28 05:16:49.250949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:27:20.042 [2024-11-28 05:16:49.250958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.042 [2024-11-28 05:16:49.251224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.042 [2024-11-28 05:16:49.251247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:20.042 [2024-11-28 05:16:49.251257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:27:20.042 [2024-11-28 05:16:49.251267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.042 [2024-11-28 05:16:49.265769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.042 [2024-11-28 05:16:49.265813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:20.042 [2024-11-28 05:16:49.265833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.482 ms 00:27:20.042 [2024-11-28 05:16:49.265841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.042 [2024-11-28 05:16:49.272112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.042 [2024-11-28 05:16:49.272155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:20.042 [2024-11-28 05:16:49.272167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.233 ms 00:27:20.042 [2024-11-28 05:16:49.272175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.042 [2024-11-28 05:16:49.275260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.042 [2024-11-28 05:16:49.275301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:20.042 [2024-11-28 05:16:49.275312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:27:20.042 [2024-11-28 05:16:49.275320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.042 [2024-11-28 05:16:49.280469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.042 [2024-11-28 05:16:49.280519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:20.042 [2024-11-28 05:16:49.280530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.107 ms 00:27:20.043 [2024-11-28 05:16:49.280539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.043 [2024-11-28 05:16:49.282403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.043 [2024-11-28 05:16:49.282441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:20.043 [2024-11-28 05:16:49.282453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.819 ms 00:27:20.043 [2024-11-28 05:16:49.282461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.043 [2024-11-28 05:16:49.284815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:20.043 [2024-11-28 05:16:49.284859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:20.043 [2024-11-28 05:16:49.284870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.323 ms 00:27:20.043 [2024-11-28 05:16:49.284879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.043 [2024-11-28 05:16:49.286720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.043 [2024-11-28 05:16:49.286758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:20.043 [2024-11-28 05:16:49.286767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.799 ms 00:27:20.043 [2024-11-28 05:16:49.286774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.043 [2024-11-28 05:16:49.288368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.043 [2024-11-28 05:16:49.288406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:20.043 [2024-11-28 05:16:49.288415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.555 ms 00:27:20.043 [2024-11-28 05:16:49.288423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.043 [2024-11-28 05:16:49.290037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.043 [2024-11-28 05:16:49.290077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:20.043 [2024-11-28 05:16:49.290086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:27:20.043 [2024-11-28 05:16:49.290094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.043 [2024-11-28 05:16:49.290132] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:20.043 [2024-11-28 05:16:49.290147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:20.043 [2024-11-28 05:16:49.290158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:20.043 [2024-11-28 05:16:49.290168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 
[2024-11-28 05:16:49.290281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 
state: free 00:27:20.043 [2024-11-28 05:16:49.290475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 
0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:20.043 [2024-11-28 05:16:49.290745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:20.044 [2024-11-28 05:16:49.290987] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:20.044 [2024-11-28 05:16:49.290998] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8895fcb2-4145-407f-a2a5-484f995f7d99 00:27:20.044 [2024-11-28 05:16:49.291010] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:20.044 [2024-11-28 05:16:49.291018] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 175296 00:27:20.044 [2024-11-28 05:16:49.291026] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 173312 00:27:20.044 [2024-11-28 05:16:49.291034] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0114 00:27:20.044 [2024-11-28 05:16:49.291044] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:20.044 [2024-11-28 05:16:49.291052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:20.044 [2024-11-28 05:16:49.291060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:20.044 [2024-11-28 05:16:49.291067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:20.044 [2024-11-28 05:16:49.291075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:20.044 [2024-11-28 05:16:49.291082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.044 [2024-11-28 05:16:49.291091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:20.044 [2024-11-28 05:16:49.291105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:27:20.044 
[2024-11-28 05:16:49.291113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.044 [2024-11-28 05:16:49.293401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.044 [2024-11-28 05:16:49.293437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:20.044 [2024-11-28 05:16:49.293449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.267 ms 00:27:20.044 [2024-11-28 05:16:49.293466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.044 [2024-11-28 05:16:49.293589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.044 [2024-11-28 05:16:49.293614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:20.044 [2024-11-28 05:16:49.293628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:27:20.044 [2024-11-28 05:16:49.293635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.044 [2024-11-28 05:16:49.300895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.044 [2024-11-28 05:16:49.300935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:20.044 [2024-11-28 05:16:49.300946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.044 [2024-11-28 05:16:49.300955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.044 [2024-11-28 05:16:49.301014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.044 [2024-11-28 05:16:49.301023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:20.044 [2024-11-28 05:16:49.301038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.044 [2024-11-28 05:16:49.301046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.044 [2024-11-28 05:16:49.301107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.044 [2024-11-28 05:16:49.301117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:20.044 [2024-11-28 05:16:49.301126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.044 [2024-11-28 05:16:49.301134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.044 [2024-11-28 05:16:49.301152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.044 [2024-11-28 05:16:49.301160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:20.044 [2024-11-28 05:16:49.301169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.044 [2024-11-28 05:16:49.301203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.044 [2024-11-28 05:16:49.314881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.044 [2024-11-28 05:16:49.314928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:20.044 [2024-11-28 05:16:49.314941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.044 [2024-11-28 05:16:49.314949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.305 [2024-11-28 05:16:49.326234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.305 [2024-11-28 05:16:49.326296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:20.305 [2024-11-28 05:16:49.326312] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.305 [2024-11-28 05:16:49.326321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.305 [2024-11-28 05:16:49.326371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.305 [2024-11-28 05:16:49.326380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:20.305 [2024-11-28 05:16:49.326389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.305 [2024-11-28 05:16:49.326397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.305 [2024-11-28 05:16:49.326432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.305 [2024-11-28 05:16:49.326441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:20.305 [2024-11-28 05:16:49.326450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.305 [2024-11-28 05:16:49.326458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.305 [2024-11-28 05:16:49.326539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.305 [2024-11-28 05:16:49.326549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:20.305 [2024-11-28 05:16:49.326558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.305 [2024-11-28 05:16:49.326566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.305 [2024-11-28 05:16:49.326596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.305 [2024-11-28 05:16:49.326606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:20.305 [2024-11-28 05:16:49.326614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.305 [2024-11-28 05:16:49.326622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.305 [2024-11-28 05:16:49.326670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.305 [2024-11-28 05:16:49.326680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:20.305 [2024-11-28 05:16:49.326688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.305 [2024-11-28 05:16:49.326696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.305 [2024-11-28 05:16:49.326747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.305 [2024-11-28 05:16:49.326758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:20.305 [2024-11-28 05:16:49.326767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.305 [2024-11-28 05:16:49.326778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.305 [2024-11-28 05:16:49.326935] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 76.982 ms, result 0 00:27:20.305 00:27:20.305 00:27:20.305 05:16:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:22.852 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:22.852 05:16:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:22.852 [2024-11-28 05:16:51.880935] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:27:22.852 [2024-11-28 05:16:51.881287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93000 ] 00:27:22.852 [2024-11-28 05:16:52.028526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.852 [2024-11-28 05:16:52.070415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.113 [2024-11-28 05:16:52.186435] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:23.113 [2024-11-28 05:16:52.186511] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:23.113 [2024-11-28 05:16:52.347079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.113 [2024-11-28 05:16:52.347135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:23.113 [2024-11-28 05:16:52.347153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:23.113 [2024-11-28 05:16:52.347162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.113 [2024-11-28 05:16:52.347235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.113 [2024-11-28 05:16:52.347247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:23.113 [2024-11-28 05:16:52.347256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:23.113 [2024-11-28 05:16:52.347269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.113 [2024-11-28 05:16:52.347296] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:23.113 [2024-11-28 05:16:52.347679] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:23.113 [2024-11-28 05:16:52.347719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.113 [2024-11-28 05:16:52.347728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:23.114 [2024-11-28 05:16:52.347745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:27:23.114 [2024-11-28 05:16:52.347752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.114 [2024-11-28 05:16:52.349543] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:23.114 [2024-11-28 05:16:52.353302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.114 [2024-11-28 05:16:52.353345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:23.114 [2024-11-28 05:16:52.353356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.762 ms 00:27:23.114 [2024-11-28 05:16:52.353374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.114 [2024-11-28 05:16:52.353442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.114 [2024-11-28 05:16:52.353452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:23.114 [2024-11-28 05:16:52.353462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:23.114 [2024-11-28 
05:16:52.353474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.114 [2024-11-28 05:16:52.361545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.114 [2024-11-28 05:16:52.361579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:23.114 [2024-11-28 05:16:52.361604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.028 ms 00:27:23.114 [2024-11-28 05:16:52.361612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.114 [2024-11-28 05:16:52.361703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.114 [2024-11-28 05:16:52.361712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:23.114 [2024-11-28 05:16:52.361723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:27:23.114 [2024-11-28 05:16:52.361731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.114 [2024-11-28 05:16:52.361789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.114 [2024-11-28 05:16:52.361800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:23.114 [2024-11-28 05:16:52.361809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:23.114 [2024-11-28 05:16:52.361824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.114 [2024-11-28 05:16:52.361851] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:23.114 [2024-11-28 05:16:52.363894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.114 [2024-11-28 05:16:52.363927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:23.114 [2024-11-28 05:16:52.363938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.052 ms 00:27:23.114 [2024-11-28 05:16:52.363946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.114 [2024-11-28 05:16:52.363985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.114 [2024-11-28 05:16:52.363997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:23.114 [2024-11-28 05:16:52.364009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:23.114 [2024-11-28 05:16:52.364019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.114 [2024-11-28 05:16:52.364040] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:23.114 [2024-11-28 05:16:52.364061] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:23.114 [2024-11-28 05:16:52.364102] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:23.114 [2024-11-28 05:16:52.364121] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:23.114 [2024-11-28 05:16:52.364255] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:23.114 [2024-11-28 05:16:52.364268] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:23.114 [2024-11-28 05:16:52.364281] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:23.114 
[2024-11-28 05:16:52.364292] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:23.114 [2024-11-28 05:16:52.364301] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:23.114 [2024-11-28 05:16:52.364309] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:23.114 [2024-11-28 05:16:52.364317] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:23.114 [2024-11-28 05:16:52.364325] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:23.114 [2024-11-28 05:16:52.364334] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:23.114 [2024-11-28 05:16:52.364342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.114 [2024-11-28 05:16:52.364355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:23.114 [2024-11-28 05:16:52.364366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:27:23.114 [2024-11-28 05:16:52.364374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.114 [2024-11-28 05:16:52.364459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.114 [2024-11-28 05:16:52.364470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:23.114 [2024-11-28 05:16:52.364479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:23.114 [2024-11-28 05:16:52.364486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.114 [2024-11-28 05:16:52.364589] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:23.114 [2024-11-28 05:16:52.364600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:23.114 [2024-11-28 05:16:52.364610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:23.114 [2024-11-28 05:16:52.364619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.114 [2024-11-28 05:16:52.364628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:23.114 [2024-11-28 05:16:52.364636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:23.114 [2024-11-28 05:16:52.364644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:23.114 [2024-11-28 05:16:52.364653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:23.114 [2024-11-28 05:16:52.364661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:23.114 [2024-11-28 05:16:52.364671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:23.114 [2024-11-28 05:16:52.364680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:23.114 [2024-11-28 05:16:52.364688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:23.114 [2024-11-28 05:16:52.364695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:23.114 [2024-11-28 05:16:52.364703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:23.114 [2024-11-28 05:16:52.364711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:23.114 [2024-11-28 05:16:52.364719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.114 [2024-11-28 05:16:52.364726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:27:23.114 [2024-11-28 05:16:52.364737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:23.114 [2024-11-28 05:16:52.364745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.114 [2024-11-28 05:16:52.364753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:23.114 [2024-11-28 05:16:52.364762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:23.114 [2024-11-28 05:16:52.364769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:23.114 [2024-11-28 05:16:52.364777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:23.114 [2024-11-28 05:16:52.364785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:23.114 [2024-11-28 05:16:52.364792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:23.114 [2024-11-28 05:16:52.364805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:23.114 [2024-11-28 05:16:52.364812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:23.114 [2024-11-28 05:16:52.364819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:23.114 [2024-11-28 05:16:52.364827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:23.114 [2024-11-28 05:16:52.364835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:23.114 [2024-11-28 05:16:52.364843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:23.114 [2024-11-28 05:16:52.364851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:23.114 [2024-11-28 05:16:52.364858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:23.114 [2024-11-28 05:16:52.364865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:23.114 [2024-11-28 05:16:52.364873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:23.114 [2024-11-28 05:16:52.364881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:23.114 [2024-11-28 05:16:52.364889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:23.114 [2024-11-28 05:16:52.364896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:23.114 [2024-11-28 05:16:52.364904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:23.114 [2024-11-28 05:16:52.364912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.114 [2024-11-28 05:16:52.364918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:23.114 [2024-11-28 05:16:52.364927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:23.114 [2024-11-28 05:16:52.364934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.114 [2024-11-28 05:16:52.364940] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:23.114 [2024-11-28 05:16:52.364950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:23.114 [2024-11-28 05:16:52.364957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:23.114 [2024-11-28 05:16:52.364964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:23.114 [2024-11-28 05:16:52.364975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:23.114 [2024-11-28 05:16:52.364981] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:23.114 [2024-11-28 05:16:52.364989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:23.114 [2024-11-28 05:16:52.364996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:23.114 [2024-11-28 05:16:52.365004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:23.114 [2024-11-28 05:16:52.365011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:23.115 [2024-11-28 05:16:52.365019] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:23.115 [2024-11-28 05:16:52.365028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:23.115 [2024-11-28 05:16:52.365037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:23.115 [2024-11-28 05:16:52.365044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:23.115 [2024-11-28 05:16:52.365054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:23.115 [2024-11-28 05:16:52.365061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:23.115 [2024-11-28 05:16:52.365068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:23.115 [2024-11-28 05:16:52.365074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:23.115 [2024-11-28 05:16:52.365081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:23.115 [2024-11-28 05:16:52.365088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:23.115 [2024-11-28 05:16:52.365096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:23.115 [2024-11-28 05:16:52.365109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:23.115 [2024-11-28 05:16:52.365116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:23.115 [2024-11-28 05:16:52.365122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:23.115 [2024-11-28 05:16:52.365130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:23.115 [2024-11-28 05:16:52.365138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:23.115 [2024-11-28 05:16:52.365145] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:23.115 [2024-11-28 05:16:52.365153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:27:23.115 [2024-11-28 05:16:52.365162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:23.115 [2024-11-28 05:16:52.365170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:23.115 [2024-11-28 05:16:52.365194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:23.115 [2024-11-28 05:16:52.365202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:23.115 [2024-11-28 05:16:52.365210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.115 [2024-11-28 05:16:52.365218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:23.115 [2024-11-28 05:16:52.365225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:27:23.115 [2024-11-28 05:16:52.365235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.115 [2024-11-28 05:16:52.379440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.115 [2024-11-28 05:16:52.379477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:23.115 [2024-11-28 05:16:52.379489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.156 ms 00:27:23.115 [2024-11-28 05:16:52.379498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.115 [2024-11-28 05:16:52.379588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.115 [2024-11-28 05:16:52.379599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:23.115 [2024-11-28 05:16:52.379609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:27:23.115 [2024-11-28 05:16:52.379618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-28 05:16:52.399606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-28 05:16:52.399664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:23.377 [2024-11-28 05:16:52.399681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.929 ms 00:27:23.377 [2024-11-28 05:16:52.399692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-28 05:16:52.399751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-28 05:16:52.399765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:23.377 [2024-11-28 05:16:52.399784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:23.377 [2024-11-28 05:16:52.399794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-28 05:16:52.400461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-28 05:16:52.400501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:23.377 [2024-11-28 05:16:52.400525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:27:23.377 [2024-11-28 05:16:52.400537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-28 05:16:52.400733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
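The two dumps above describe the same regions in different units: ftl_layout.c reports offsets and sizes in MiB, while the v5 superblock dump gives hex block offsets and counts. The figures are consistent with a 4 KiB FTL block: the l2p region's blk_sz:0x5000 is 20480 blocks, i.e. the 80.00 MiB shown earlier, and blk_sz:0x20 is the 0.12 MiB superblock region. A throwaway helper (a sketch, not part of the test suite) makes the cross-check easy; bc's scale=2 truncation reproduces the dump's 0.12 figure:

  # Convert a hex block count from the SB metadata dump into MiB,
  # assuming the 4 KiB FTL block size implied by the numbers above.
  to_mib() { echo "scale=2; $(( $1 )) * 4096 / 1048576" | bc; }
  to_mib 0x20    # sb / sb_mirror:    .12 MiB    (32 blocks)
  to_mib 0x5000  # l2p:             80.00 MiB (20480 blocks)
  to_mib 0x800   # one p2l region:   8.00 MiB  (2048 blocks)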
00:27:23.377 [2024-11-28 05:16:52.400748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:23.377 [2024-11-28 05:16:52.400766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:27:23.377 [2024-11-28 05:16:52.400778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-28 05:16:52.409123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-28 05:16:52.409160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:23.377 [2024-11-28 05:16:52.409171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.312 ms 00:27:23.377 [2024-11-28 05:16:52.409195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-28 05:16:52.412801] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:23.377 [2024-11-28 05:16:52.412844] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:23.377 [2024-11-28 05:16:52.412860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-28 05:16:52.412869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:23.377 [2024-11-28 05:16:52.412878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.570 ms 00:27:23.377 [2024-11-28 05:16:52.412886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-28 05:16:52.428584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-28 05:16:52.428627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:23.377 [2024-11-28 05:16:52.428639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.645 ms 00:27:23.377 [2024-11-28 05:16:52.428655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-28 05:16:52.431288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-28 05:16:52.431323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:23.377 [2024-11-28 05:16:52.431333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.576 ms 00:27:23.377 [2024-11-28 05:16:52.431340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-28 05:16:52.434259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-28 05:16:52.434305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:23.377 [2024-11-28 05:16:52.434326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.870 ms 00:27:23.377 [2024-11-28 05:16:52.434335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-28 05:16:52.434688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-28 05:16:52.434710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:23.377 [2024-11-28 05:16:52.434720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:27:23.377 [2024-11-28 05:16:52.434729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-28 05:16:52.458304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-28 05:16:52.458362] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:23.377 [2024-11-28 05:16:52.458377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.549 ms 00:27:23.377 [2024-11-28 05:16:52.458386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-28 05:16:52.466669] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:23.377 [2024-11-28 05:16:52.469812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-28 05:16:52.469857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:23.377 [2024-11-28 05:16:52.469869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.372 ms 00:27:23.377 [2024-11-28 05:16:52.469878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.377 [2024-11-28 05:16:52.469958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.377 [2024-11-28 05:16:52.469970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:23.377 [2024-11-28 05:16:52.469986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:23.378 [2024-11-28 05:16:52.469995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.378 [2024-11-28 05:16:52.470841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.378 [2024-11-28 05:16:52.470878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:23.378 [2024-11-28 05:16:52.470889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:27:23.378 [2024-11-28 05:16:52.470897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.378 [2024-11-28 05:16:52.470925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.378 [2024-11-28 05:16:52.470934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:23.378 [2024-11-28 05:16:52.470943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:23.378 [2024-11-28 05:16:52.470951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.378 [2024-11-28 05:16:52.470995] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:23.378 [2024-11-28 05:16:52.471006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.378 [2024-11-28 05:16:52.471018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:23.378 [2024-11-28 05:16:52.471032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:23.378 [2024-11-28 05:16:52.471040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.378 [2024-11-28 05:16:52.476719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.378 [2024-11-28 05:16:52.476759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:23.378 [2024-11-28 05:16:52.476779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.658 ms 00:27:23.378 [2024-11-28 05:16:52.476791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.378 [2024-11-28 05:16:52.476879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.378 [2024-11-28 05:16:52.476893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:23.378 [2024-11-28 05:16:52.476905] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:28:27.378 [2024-11-28 05:17:56.476921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.378 [2024-11-28 05:17:56.478110] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.527 ms, result 0 00:27:24.765  [2024-11-28T05:16:54.992Z] Copying: 13/1024 [MB] (13 MBps) [... 62 intermediate redraws of this progress line (13 MB through 1019 MB, 10-25 MBps) elided ...] [2024-11-28T05:17:56.932Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-28 05:17:56.796080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.648 [2024-11-28 05:17:56.796221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:27.648 [2024-11-28 05:17:56.796260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:27.648 [2024-11-28 05:17:56.796280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.648 [2024-11-28 05:17:56.796329] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:27.648 [2024-11-28 05:17:56.797322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.648 [2024-11-28 05:17:56.797377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:27.648 [2024-11-28 05:17:56.797398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:28:27.648 [2024-11-28 05:17:56.797414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.648 [2024-11-28 05:17:56.798031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.648 [2024-11-28 05:17:56.798075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:27.648 [2024-11-28 05:17:56.798095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:28:27.648 [2024-11-28 05:17:56.798122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.648 [2024-11-28 05:17:56.803937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.648 [2024-11-28 05:17:56.803978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:27.648 [2024-11-28 05:17:56.803990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.784 ms 00:28:27.648 [2024-11-28 05:17:56.803998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.648 [2024-11-28 05:17:56.811044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.648 [2024-11-28 05:17:56.811094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:27.649 [2024-11-28 05:17:56.811106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.022 ms 00:28:27.649 [2024-11-28 05:17:56.811115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.649 [2024-11-28 05:17:56.814088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.649 [2024-11-28 05:17:56.814171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:27.649 [2024-11-28 05:17:56.814204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.868 ms 00:28:27.649 [2024-11-28 05:17:56.814212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.649 [2024-11-28 05:17:56.819815] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.649 [2024-11-28 05:17:56.819878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:27.649 [2024-11-28 05:17:56.819890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.546 ms 00:28:27.649 [2024-11-28 05:17:56.819911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.649 [2024-11-28 05:17:56.824827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.649 [2024-11-28 05:17:56.824895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:27.649 [2024-11-28 05:17:56.824907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.853 ms 00:28:27.649 [2024-11-28 05:17:56.824921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.649 [2024-11-28 05:17:56.828389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.649 [2024-11-28 05:17:56.828443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:27.649 [2024-11-28 05:17:56.828454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.448 ms 00:28:27.649 [2024-11-28 05:17:56.828463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.649 [2024-11-28 05:17:56.831880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.649 [2024-11-28 05:17:56.831938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:27.649 [2024-11-28 05:17:56.831948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.266 ms 00:28:27.649 [2024-11-28 05:17:56.831955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.649 [2024-11-28 05:17:56.834599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.649 [2024-11-28 05:17:56.834653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:27.649 [2024-11-28 05:17:56.834663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:28:27.649 [2024-11-28 05:17:56.834671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.649 [2024-11-28 05:17:56.836977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.649 [2024-11-28 05:17:56.837034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:27.649 [2024-11-28 05:17:56.837046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.227 ms 00:28:27.649 [2024-11-28 05:17:56.837054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.649 [2024-11-28 05:17:56.837097] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:27.649 [2024-11-28 05:17:56.837113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:27.649 [2024-11-28 05:17:56.837124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:27.649 [2024-11-28 05:17:56.837134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837159] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 
05:17:56.837389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:28:27.649 [2024-11-28 05:17:56.837666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:27.649 [2024-11-28 05:17:56.837794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.837991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.838000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.838008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.838016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.838023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.838031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.838039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.838047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.838056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.838064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.838073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.838080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.838088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.838096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.838104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:27.650 [2024-11-28 05:17:56.838121] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:27.650 [2024-11-28 05:17:56.838129] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8895fcb2-4145-407f-a2a5-484f995f7d99 00:28:27.650 [2024-11-28 05:17:56.838137] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:27.650 [2024-11-28 05:17:56.838146] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:27.650 [2024-11-28 05:17:56.838153] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:27.650 [2024-11-28 05:17:56.838190] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:27.650 [2024-11-28 05:17:56.838208] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:27.650 [2024-11-28 05:17:56.838217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:27.650 [2024-11-28 05:17:56.838230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:27.650 [2024-11-28 05:17:56.838237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:27.650 [2024-11-28 05:17:56.838245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:27.650 [2024-11-28 05:17:56.838253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.650 [2024-11-28 05:17:56.838269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:27.650 [2024-11-28 05:17:56.838279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms 00:28:27.650 [2024-11-28 05:17:56.838289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.650 [2024-11-28 05:17:56.840711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.650 [2024-11-28 05:17:56.840752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:27.650 [2024-11-28 05:17:56.840765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.397 ms 00:28:27.650 [2024-11-28 05:17:56.840774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.650 [2024-11-28 05:17:56.840905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.650 [2024-11-28 05:17:56.840914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:27.650 [2024-11-28 05:17:56.840923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:28:27.650 [2024-11-28 05:17:56.840931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.650 [2024-11-28 05:17:56.849056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.650 [2024-11-28 05:17:56.849112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:27.650 [2024-11-28 05:17:56.849124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.650 [2024-11-28 05:17:56.849136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.650 [2024-11-28 05:17:56.849212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.650 [2024-11-28 05:17:56.849221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:27.650 [2024-11-28 05:17:56.849230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.650 [2024-11-28 05:17:56.849238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.650 [2024-11-28 05:17:56.849317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.650 [2024-11-28 05:17:56.849329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:27.650 [2024-11-28 05:17:56.849337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.650 [2024-11-28 05:17:56.849345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.650 [2024-11-28 05:17:56.849366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.650 [2024-11-28 05:17:56.849375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:28:27.650 [2024-11-28 05:17:56.849384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.650 [2024-11-28 05:17:56.849392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.650 [2024-11-28 05:17:56.863249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.650 [2024-11-28 05:17:56.863300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:27.650 [2024-11-28 05:17:56.863316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.650 [2024-11-28 05:17:56.863328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.650 [2024-11-28 05:17:56.873454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.650 [2024-11-28 05:17:56.873504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:27.650 [2024-11-28 05:17:56.873515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.650 [2024-11-28 05:17:56.873523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.650 [2024-11-28 05:17:56.873593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.650 [2024-11-28 05:17:56.873605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:27.650 [2024-11-28 05:17:56.873614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.650 [2024-11-28 05:17:56.873622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.650 [2024-11-28 05:17:56.873668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.650 [2024-11-28 05:17:56.873683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:27.650 [2024-11-28 05:17:56.873692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.650 [2024-11-28 05:17:56.873699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.650 [2024-11-28 05:17:56.873778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.650 [2024-11-28 05:17:56.873789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:27.650 [2024-11-28 05:17:56.873797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.650 [2024-11-28 05:17:56.873805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.650 [2024-11-28 05:17:56.873834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.650 [2024-11-28 05:17:56.873847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:27.650 [2024-11-28 05:17:56.873854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.650 [2024-11-28 05:17:56.873862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.650 [2024-11-28 05:17:56.873902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.650 [2024-11-28 05:17:56.873912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:27.650 [2024-11-28 05:17:56.873920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.651 [2024-11-28 05:17:56.873928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.651 [2024-11-28 05:17:56.873976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.651 [2024-11-28 05:17:56.874073] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:27.651 [2024-11-28 05:17:56.874083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.651 [2024-11-28 05:17:56.874099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.651 [2024-11-28 05:17:56.874253] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 78.144 ms, result 0 00:28:27.911 00:28:27.911 00:28:27.911 05:17:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:30.456 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:30.456 05:17:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:30.456 05:17:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:30.457 05:17:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:30.457 05:17:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:30.457 05:17:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:30.457 05:17:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:30.457 05:17:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:30.457 05:17:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 90958 00:28:30.457 05:17:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 90958 ']' 00:28:30.457 05:17:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 90958 00:28:30.457 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90958) - No such process 00:28:30.457 Process with pid 90958 is not found 00:28:30.457 05:17:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 90958 is not found' 00:28:30.457 05:17:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:30.716 Remove shared memory files 00:28:30.716 05:17:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:30.716 05:17:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:30.716 05:17:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:30.716 05:17:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:30.716 05:17:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:30.716 05:17:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:30.716 05:17:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:30.716 ************************************ 00:28:30.716 END TEST ftl_dirty_shutdown 00:28:30.716 ************************************ 00:28:30.716 00:28:30.716 real 4m19.730s 00:28:30.716 user 4m30.450s 00:28:30.716 sys 0m23.893s 00:28:30.716 05:17:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:30.716 05:17:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:30.716 05:17:59 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:30.716 05:17:59 ftl -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:30.716 05:17:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:30.716 05:17:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:30.716 ************************************ 00:28:30.716 START TEST ftl_upgrade_shutdown 00:28:30.716 ************************************ 00:28:30.716 05:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:30.716 * Looking for test storage... 00:28:30.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:30.716 05:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:30.716 05:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:30.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.717 --rc genhtml_branch_coverage=1 00:28:30.717 --rc genhtml_function_coverage=1 00:28:30.717 --rc genhtml_legend=1 00:28:30.717 --rc geninfo_all_blocks=1 00:28:30.717 --rc geninfo_unexecuted_blocks=1 00:28:30.717 00:28:30.717 ' 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:30.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.717 --rc genhtml_branch_coverage=1 00:28:30.717 --rc genhtml_function_coverage=1 00:28:30.717 --rc genhtml_legend=1 00:28:30.717 --rc geninfo_all_blocks=1 00:28:30.717 --rc geninfo_unexecuted_blocks=1 00:28:30.717 00:28:30.717 ' 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:30.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.717 --rc genhtml_branch_coverage=1 00:28:30.717 --rc genhtml_function_coverage=1 00:28:30.717 --rc genhtml_legend=1 00:28:30.717 --rc geninfo_all_blocks=1 00:28:30.717 --rc geninfo_unexecuted_blocks=1 00:28:30.717 00:28:30.717 ' 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:30.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.717 --rc genhtml_branch_coverage=1 00:28:30.717 --rc genhtml_function_coverage=1 00:28:30.717 --rc genhtml_legend=1 00:28:30.717 --rc geninfo_all_blocks=1 00:28:30.717 --rc geninfo_unexecuted_blocks=1 00:28:30.717 00:28:30.717 ' 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:30.717 05:17:59 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:30.978 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:30.979 05:18:00 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93761 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93761 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93761 ']' 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:30.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:30.979 05:18:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:30.979 [2024-11-28 05:18:00.074257] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
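The target bring-up traced here follows a simple pattern: start spdk_tgt pinned to core 0, then block until its default RPC socket (/var/tmp/spdk.sock) answers. Below is a minimal sketch of that pattern using the paths from this run; the retry loop only approximates common.sh's waitforlisten rather than reproducing it, and spdk_get_version serves purely as a cheap liveness probe.

  spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$spdk_tgt_bin" --cpumask='[0]' &
  spdk_tgt_pid=$!

  # Poll the default RPC socket (/var/tmp/spdk.sock) until the target
  # answers; max_retries=100 mirrors the value traced above.
  for ((i = 0; i < 100; i++)); do
      "$rpc_py" spdk_get_version > /dev/null 2>&1 && break
      sleep 0.5
  done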
00:28:30.979 [2024-11-28 05:18:00.074380] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93761 ] 00:28:30.979 [2024-11-28 05:18:00.220156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.979 [2024-11-28 05:18:00.249387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:31.919 05:18:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:31.919 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:31.919 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:31.919 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:31.919 05:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:28:31.919 05:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:31.919 05:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:31.919 05:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:28:31.919 05:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:32.180 05:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:32.180 { 00:28:32.180 "name": "basen1", 00:28:32.180 "aliases": [ 00:28:32.180 "5ffcae2f-97fd-471f-a43b-3ba062ef1720" 00:28:32.180 ], 00:28:32.180 "product_name": "NVMe disk", 00:28:32.180 "block_size": 4096, 00:28:32.180 "num_blocks": 1310720, 00:28:32.180 "uuid": "5ffcae2f-97fd-471f-a43b-3ba062ef1720", 00:28:32.180 "numa_id": -1, 00:28:32.180 "assigned_rate_limits": { 00:28:32.180 "rw_ios_per_sec": 0, 00:28:32.180 "rw_mbytes_per_sec": 0, 00:28:32.180 "r_mbytes_per_sec": 0, 00:28:32.180 "w_mbytes_per_sec": 0 00:28:32.180 }, 00:28:32.180 "claimed": true, 00:28:32.180 "claim_type": "read_many_write_one", 00:28:32.180 "zoned": false, 00:28:32.180 "supported_io_types": { 00:28:32.180 "read": true, 00:28:32.180 "write": true, 00:28:32.180 "unmap": true, 00:28:32.180 "flush": true, 00:28:32.180 "reset": true, 00:28:32.180 "nvme_admin": true, 00:28:32.180 "nvme_io": true, 00:28:32.180 "nvme_io_md": false, 00:28:32.180 "write_zeroes": true, 00:28:32.180 "zcopy": false, 00:28:32.180 "get_zone_info": false, 00:28:32.180 "zone_management": false, 00:28:32.180 "zone_append": false, 00:28:32.180 "compare": true, 00:28:32.180 "compare_and_write": false, 00:28:32.180 "abort": true, 00:28:32.180 "seek_hole": false, 00:28:32.180 "seek_data": false, 00:28:32.180 "copy": true, 00:28:32.180 "nvme_iov_md": false 00:28:32.180 }, 00:28:32.180 "driver_specific": { 00:28:32.180 "nvme": [ 00:28:32.180 { 00:28:32.180 "pci_address": "0000:00:11.0", 00:28:32.180 "trid": { 00:28:32.180 "trtype": "PCIe", 00:28:32.180 "traddr": "0000:00:11.0" 00:28:32.180 }, 00:28:32.180 "ctrlr_data": { 00:28:32.180 "cntlid": 0, 00:28:32.180 "vendor_id": "0x1b36", 00:28:32.180 "model_number": "QEMU NVMe Ctrl", 00:28:32.180 "serial_number": "12341", 00:28:32.180 "firmware_revision": "8.0.0", 00:28:32.180 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:32.180 "oacs": { 00:28:32.180 "security": 0, 00:28:32.180 "format": 1, 00:28:32.180 "firmware": 0, 00:28:32.180 "ns_manage": 1 00:28:32.180 }, 00:28:32.180 "multi_ctrlr": false, 00:28:32.180 "ana_reporting": false 00:28:32.180 }, 00:28:32.180 "vs": { 00:28:32.180 "nvme_version": "1.4" 00:28:32.180 }, 00:28:32.180 "ns_data": { 00:28:32.180 "id": 1, 00:28:32.180 "can_share": false 00:28:32.180 } 00:28:32.180 } 00:28:32.180 ], 00:28:32.180 "mp_policy": "active_passive" 00:28:32.180 } 00:28:32.180 } 00:28:32.180 ]' 00:28:32.180 05:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:32.180 05:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:32.180 05:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:32.180 05:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:32.180 05:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:32.180 05:18:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:28:32.180 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:32.181 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:32.181 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:32.181 05:18:01 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:32.181 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:32.441 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=8a5325fe-a140-44e6-a7eb-71e15ec38500 00:28:32.441 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:32.441 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8a5325fe-a140-44e6-a7eb-71e15ec38500 00:28:32.700 05:18:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:32.959 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=3d3c9b80-d007-453c-9fe0-f89de1e5e527 00:28:32.959 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 3d3c9b80-d007-453c-9fe0-f89de1e5e527 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=81cca2ad-c46c-4ddd-bb65-e09d67b1dfdc 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 81cca2ad-c46c-4ddd-bb65-e09d67b1dfdc ]] 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 81cca2ad-c46c-4ddd-bb65-e09d67b1dfdc 5120 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=81cca2ad-c46c-4ddd-bb65-e09d67b1dfdc 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 81cca2ad-c46c-4ddd-bb65-e09d67b1dfdc 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=81cca2ad-c46c-4ddd-bb65-e09d67b1dfdc 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 81cca2ad-c46c-4ddd-bb65-e09d67b1dfdc 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:33.219 { 00:28:33.219 "name": "81cca2ad-c46c-4ddd-bb65-e09d67b1dfdc", 00:28:33.219 "aliases": [ 00:28:33.219 "lvs/basen1p0" 00:28:33.219 ], 00:28:33.219 "product_name": "Logical Volume", 00:28:33.219 "block_size": 4096, 00:28:33.219 "num_blocks": 5242880, 00:28:33.219 "uuid": "81cca2ad-c46c-4ddd-bb65-e09d67b1dfdc", 00:28:33.219 "assigned_rate_limits": { 00:28:33.219 "rw_ios_per_sec": 0, 00:28:33.219 "rw_mbytes_per_sec": 0, 00:28:33.219 "r_mbytes_per_sec": 0, 00:28:33.219 "w_mbytes_per_sec": 0 00:28:33.219 }, 00:28:33.219 "claimed": false, 00:28:33.219 "zoned": false, 00:28:33.219 "supported_io_types": { 00:28:33.219 "read": true, 00:28:33.219 "write": true, 00:28:33.219 "unmap": true, 00:28:33.219 "flush": false, 00:28:33.219 "reset": true, 00:28:33.219 "nvme_admin": false, 00:28:33.219 "nvme_io": false, 00:28:33.219 "nvme_io_md": false, 00:28:33.219 "write_zeroes": 
true, 00:28:33.219 "zcopy": false, 00:28:33.219 "get_zone_info": false, 00:28:33.219 "zone_management": false, 00:28:33.219 "zone_append": false, 00:28:33.219 "compare": false, 00:28:33.219 "compare_and_write": false, 00:28:33.219 "abort": false, 00:28:33.219 "seek_hole": true, 00:28:33.219 "seek_data": true, 00:28:33.219 "copy": false, 00:28:33.219 "nvme_iov_md": false 00:28:33.219 }, 00:28:33.219 "driver_specific": { 00:28:33.219 "lvol": { 00:28:33.219 "lvol_store_uuid": "3d3c9b80-d007-453c-9fe0-f89de1e5e527", 00:28:33.219 "base_bdev": "basen1", 00:28:33.219 "thin_provision": true, 00:28:33.219 "num_allocated_clusters": 0, 00:28:33.219 "snapshot": false, 00:28:33.219 "clone": false, 00:28:33.219 "esnap_clone": false 00:28:33.219 } 00:28:33.219 } 00:28:33.219 } 00:28:33.219 ]' 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:33.219 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:33.480 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:33.480 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:33.480 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:33.739 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:33.739 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:33.739 05:18:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 81cca2ad-c46c-4ddd-bb65-e09d67b1dfdc -c cachen1p0 --l2p_dram_limit 2 00:28:33.999 [2024-11-28 05:18:03.166369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.999 [2024-11-28 05:18:03.166420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:33.999 [2024-11-28 05:18:03.166433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:33.999 [2024-11-28 05:18:03.166441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.999 [2024-11-28 05:18:03.166497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.999 [2024-11-28 05:18:03.166509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:33.999 [2024-11-28 05:18:03.166516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:28:33.999 [2024-11-28 05:18:03.166546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.999 [2024-11-28 05:18:03.166563] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:33.999 [2024-11-28 
05:18:03.166805] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:33.999 [2024-11-28 05:18:03.166818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.999 [2024-11-28 05:18:03.166829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:33.999 [2024-11-28 05:18:03.166836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.260 ms 00:28:33.999 [2024-11-28 05:18:03.166844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.999 [2024-11-28 05:18:03.166871] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 79b7e35d-cd40-48e3-8543-1201adf9e9ba 00:28:33.999 [2024-11-28 05:18:03.168338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.999 [2024-11-28 05:18:03.168379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:33.999 [2024-11-28 05:18:03.168390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:33.999 [2024-11-28 05:18:03.168396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.999 [2024-11-28 05:18:03.174999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.999 [2024-11-28 05:18:03.175035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:33.999 [2024-11-28 05:18:03.175049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.538 ms 00:28:33.999 [2024-11-28 05:18:03.175055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.999 [2024-11-28 05:18:03.175099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.999 [2024-11-28 05:18:03.175106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:33.999 [2024-11-28 05:18:03.175115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:33.999 [2024-11-28 05:18:03.175121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.999 [2024-11-28 05:18:03.175166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.999 [2024-11-28 05:18:03.175174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:33.999 [2024-11-28 05:18:03.175195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:33.999 [2024-11-28 05:18:03.175201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.999 [2024-11-28 05:18:03.175221] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:33.999 [2024-11-28 05:18:03.176868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.999 [2024-11-28 05:18:03.176903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:33.999 [2024-11-28 05:18:03.176911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.653 ms 00:28:33.999 [2024-11-28 05:18:03.176919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.999 [2024-11-28 05:18:03.176942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.999 [2024-11-28 05:18:03.176950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:33.999 [2024-11-28 05:18:03.176957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:33.999 [2024-11-28 05:18:03.176967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:33.999 [2024-11-28 05:18:03.176986] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:33.999 [2024-11-28 05:18:03.177107] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:33.999 [2024-11-28 05:18:03.177117] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:33.999 [2024-11-28 05:18:03.177127] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:33.999 [2024-11-28 05:18:03.177135] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:33.999 [2024-11-28 05:18:03.177147] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:33.999 [2024-11-28 05:18:03.177153] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:33.999 [2024-11-28 05:18:03.177165] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:33.999 [2024-11-28 05:18:03.177171] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:33.999 [2024-11-28 05:18:03.177212] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:33.999 [2024-11-28 05:18:03.177221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.999 [2024-11-28 05:18:03.177229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:33.999 [2024-11-28 05:18:03.177240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.236 ms 00:28:33.999 [2024-11-28 05:18:03.177251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.999 [2024-11-28 05:18:03.177316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.999 [2024-11-28 05:18:03.177327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:33.999 [2024-11-28 05:18:03.177333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:28:33.999 [2024-11-28 05:18:03.177342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.999 [2024-11-28 05:18:03.177418] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:33.999 [2024-11-28 05:18:03.177427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:33.999 [2024-11-28 05:18:03.177433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:33.999 [2024-11-28 05:18:03.177443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:33.999 [2024-11-28 05:18:03.177453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:33.999 [2024-11-28 05:18:03.177460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:33.999 [2024-11-28 05:18:03.177465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:33.999 [2024-11-28 05:18:03.177473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:33.999 [2024-11-28 05:18:03.177478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:33.999 [2024-11-28 05:18:03.177485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:33.999 [2024-11-28 05:18:03.177491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:33.999 [2024-11-28 05:18:03.177497] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:28:33.999 [2024-11-28 05:18:03.177502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:33.999 [2024-11-28 05:18:03.177511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:33.999 [2024-11-28 05:18:03.177519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:33.999 [2024-11-28 05:18:03.177527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:33.999 [2024-11-28 05:18:03.177533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:33.999 [2024-11-28 05:18:03.177542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:33.999 [2024-11-28 05:18:03.177548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:33.999 [2024-11-28 05:18:03.177567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:33.999 [2024-11-28 05:18:03.177574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:33.999 [2024-11-28 05:18:03.177581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:33.999 [2024-11-28 05:18:03.177587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:33.999 [2024-11-28 05:18:03.177594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:33.999 [2024-11-28 05:18:03.177600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:33.999 [2024-11-28 05:18:03.177608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:33.999 [2024-11-28 05:18:03.177614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:33.999 [2024-11-28 05:18:03.177621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:33.999 [2024-11-28 05:18:03.177627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:33.999 [2024-11-28 05:18:03.177637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:33.999 [2024-11-28 05:18:03.177643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:33.999 [2024-11-28 05:18:03.177651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:33.999 [2024-11-28 05:18:03.177656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:33.999 [2024-11-28 05:18:03.177664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:33.999 [2024-11-28 05:18:03.177671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:33.999 [2024-11-28 05:18:03.177679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:33.999 [2024-11-28 05:18:03.177684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:33.999 [2024-11-28 05:18:03.177692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:34.000 [2024-11-28 05:18:03.177698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:34.000 [2024-11-28 05:18:03.177706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:34.000 [2024-11-28 05:18:03.177711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:34.000 [2024-11-28 05:18:03.177718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:34.000 [2024-11-28 05:18:03.177724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:34.000 [2024-11-28 05:18:03.177731] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:28:34.000 [2024-11-28 05:18:03.177738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:34.000 [2024-11-28 05:18:03.177748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:34.000 [2024-11-28 05:18:03.177758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:34.000 [2024-11-28 05:18:03.177768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:34.000 [2024-11-28 05:18:03.177779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:34.000 [2024-11-28 05:18:03.177787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:34.000 [2024-11-28 05:18:03.177794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:34.000 [2024-11-28 05:18:03.177801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:34.000 [2024-11-28 05:18:03.177806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:34.000 [2024-11-28 05:18:03.177817] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:34.000 [2024-11-28 05:18:03.177827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:34.000 [2024-11-28 05:18:03.177837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:34.000 [2024-11-28 05:18:03.177849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:34.000 [2024-11-28 05:18:03.177858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:34.000 [2024-11-28 05:18:03.177864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:34.000 [2024-11-28 05:18:03.177871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:34.000 [2024-11-28 05:18:03.177878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:34.000 [2024-11-28 05:18:03.177889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:34.000 [2024-11-28 05:18:03.177896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:34.000 [2024-11-28 05:18:03.177904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:34.000 [2024-11-28 05:18:03.177910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:34.000 [2024-11-28 05:18:03.177918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:34.000 [2024-11-28 05:18:03.177925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:34.000 [2024-11-28 05:18:03.177933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:34.000 [2024-11-28 05:18:03.177938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:34.000 [2024-11-28 05:18:03.177945] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:34.000 [2024-11-28 05:18:03.177951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:34.000 [2024-11-28 05:18:03.177959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:34.000 [2024-11-28 05:18:03.177965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:34.000 [2024-11-28 05:18:03.177973] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:34.000 [2024-11-28 05:18:03.177978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:34.000 [2024-11-28 05:18:03.177985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.000 [2024-11-28 05:18:03.177991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:34.000 [2024-11-28 05:18:03.178000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.617 ms 00:28:34.000 [2024-11-28 05:18:03.178006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.000 [2024-11-28 05:18:03.178038] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
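The layout dumped above belongs to the FTL instance assembled earlier in this trace: a 20 GiB thin lvol on the base controller serves as the data device and a 5 GiB split of the second controller serves as the NV cache (any stale lvstore returned by bdev_lvol_get_lvstores was deleted first). Condensed to its RPC calls, that setup was roughly the following; this is a sketch, and the lvstore/lvol UUID placeholders stand in for the runtime-assigned values printed in this run. The scrub pass recorded next then wipes the 5 NV cache chunks carved out of cachen1p0.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$rpc_py" bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # -> basen1
  "$rpc_py" bdev_lvol_create_lvstore basen1 lvs                           # prints the lvstore UUID
  "$rpc_py" bdev_lvol_create basen1p0 20480 -t -u <lvs_uuid>              # 20 GiB thin lvol, prints its UUID
  "$rpc_py" bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0  # -> cachen1
  "$rpc_py" bdev_split_create cachen1 -s 5120 1                           # -> cachen1p0, the 5 GiB NV cache
  "$rpc_py" -t 60 bdev_ftl_create -b ftl -d <lvol_uuid> -c cachen1p0 --l2p_dram_limit 2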
00:28:34.000 [2024-11-28 05:18:03.178046] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:36.636 [2024-11-28 05:18:05.654415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.636 [2024-11-28 05:18:05.654476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:36.636 [2024-11-28 05:18:05.654492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2476.363 ms 00:28:36.636 [2024-11-28 05:18:05.654501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.636 [2024-11-28 05:18:05.662673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.636 [2024-11-28 05:18:05.662715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:36.636 [2024-11-28 05:18:05.662728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.090 ms 00:28:36.636 [2024-11-28 05:18:05.662736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.636 [2024-11-28 05:18:05.662787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.636 [2024-11-28 05:18:05.662796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:36.636 [2024-11-28 05:18:05.662807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:36.636 [2024-11-28 05:18:05.662814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.636 [2024-11-28 05:18:05.671329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.636 [2024-11-28 05:18:05.671364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:36.636 [2024-11-28 05:18:05.671376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.470 ms 00:28:36.636 [2024-11-28 05:18:05.671386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.636 [2024-11-28 05:18:05.671414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.636 [2024-11-28 05:18:05.671422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:36.636 [2024-11-28 05:18:05.671433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:36.637 [2024-11-28 05:18:05.671440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.637 [2024-11-28 05:18:05.671775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.637 [2024-11-28 05:18:05.671794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:36.637 [2024-11-28 05:18:05.671804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:28:36.637 [2024-11-28 05:18:05.671814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.637 [2024-11-28 05:18:05.671858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.637 [2024-11-28 05:18:05.671866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:36.637 [2024-11-28 05:18:05.671876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:36.637 [2024-11-28 05:18:05.671887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.637 [2024-11-28 05:18:05.677394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.637 [2024-11-28 05:18:05.677427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:36.637 [2024-11-28 05:18:05.677439] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.486 ms 00:28:36.637 [2024-11-28 05:18:05.677446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.637 [2024-11-28 05:18:05.694547] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:36.637 [2024-11-28 05:18:05.695555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.637 [2024-11-28 05:18:05.695606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:36.637 [2024-11-28 05:18:05.695626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.033 ms 00:28:36.637 [2024-11-28 05:18:05.695643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.637 [2024-11-28 05:18:05.707290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.637 [2024-11-28 05:18:05.707340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:36.637 [2024-11-28 05:18:05.707351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.604 ms 00:28:36.637 [2024-11-28 05:18:05.707366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.637 [2024-11-28 05:18:05.707447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.637 [2024-11-28 05:18:05.707462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:36.637 [2024-11-28 05:18:05.707471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:28:36.637 [2024-11-28 05:18:05.707481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.637 [2024-11-28 05:18:05.710366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.637 [2024-11-28 05:18:05.710406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:36.637 [2024-11-28 05:18:05.710420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.866 ms 00:28:36.637 [2024-11-28 05:18:05.710430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.637 [2024-11-28 05:18:05.713452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.637 [2024-11-28 05:18:05.713594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:36.637 [2024-11-28 05:18:05.713610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.987 ms 00:28:36.637 [2024-11-28 05:18:05.713619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.637 [2024-11-28 05:18:05.713919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.637 [2024-11-28 05:18:05.713932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:36.637 [2024-11-28 05:18:05.713941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.268 ms 00:28:36.637 [2024-11-28 05:18:05.713955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.637 [2024-11-28 05:18:05.741820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.637 [2024-11-28 05:18:05.741859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:36.637 [2024-11-28 05:18:05.741872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.842 ms 00:28:36.637 [2024-11-28 05:18:05.741883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.637 [2024-11-28 05:18:05.745821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:36.637 [2024-11-28 05:18:05.745866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:36.637 [2024-11-28 05:18:05.745878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.877 ms 00:28:36.637 [2024-11-28 05:18:05.745889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.637 [2024-11-28 05:18:05.749917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.637 [2024-11-28 05:18:05.749954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:36.637 [2024-11-28 05:18:05.749963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.993 ms 00:28:36.637 [2024-11-28 05:18:05.749973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.637 [2024-11-28 05:18:05.754071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.637 [2024-11-28 05:18:05.754107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:36.637 [2024-11-28 05:18:05.754117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.065 ms 00:28:36.637 [2024-11-28 05:18:05.754128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.637 [2024-11-28 05:18:05.754167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.637 [2024-11-28 05:18:05.754194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:36.637 [2024-11-28 05:18:05.754203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:36.637 [2024-11-28 05:18:05.754218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.637 [2024-11-28 05:18:05.754293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.637 [2024-11-28 05:18:05.754305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:36.637 [2024-11-28 05:18:05.754313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:28:36.637 [2024-11-28 05:18:05.754324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.637 [2024-11-28 05:18:05.755215] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2588.439 ms, result 0 00:28:36.637 { 00:28:36.637 "name": "ftl", 00:28:36.637 "uuid": "79b7e35d-cd40-48e3-8543-1201adf9e9ba" 00:28:36.637 } 00:28:36.637 05:18:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:36.637 [2024-11-28 05:18:05.910542] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.898 05:18:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:36.898 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:37.160 [2024-11-28 05:18:06.315003] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:37.160 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:37.421 [2024-11-28 05:18:06.535429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:37.421 05:18:06 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:37.682 Fill FTL, iteration 1 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=93872 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 93872 /var/tmp/spdk.tgt.sock 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93872 ']' 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:37.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.682 05:18:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:37.943 [2024-11-28 05:18:06.971695] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
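Fill sizing for the pass that is starting: bs=1048576 and count=1024 give 1048576 * 1024 = 1073741824 bytes, exactly the 1 GiB declared as size, written at queue depth 2; --seek and --skip are counted in bs-sized blocks, so seek=0 targets the first GiB of ftln1. The tcp_dd helper traced above runs in two phases, roughly as sketched below (an approximation of the flow, not the literal common.sh source): set up the initiator config once, then hand the I/O arguments to spdk_dd.

  tcp_dd() {
      tcp_initiator_setup    # produces test/ftl/config/ini.json (see the next sketch)
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
          --cpumask='[1]' \
          --rpc-socket=/var/tmp/spdk.tgt.sock \
          --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
          "$@"               # e.g. --if=/dev/urandom --ob=ftln1 --bs=1048576 \
                             #      --count=1024 --qd=2 --seek=0
  }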
00:28:37.943 [2024-11-28 05:18:06.972021] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93872 ] 00:28:37.943 [2024-11-28 05:18:07.122643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.943 [2024-11-28 05:18:07.138937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.513 05:18:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:38.513 05:18:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:38.513 05:18:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:38.772 ftln1 00:28:38.772 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:38.772 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:39.031 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:39.031 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 93872 00:28:39.031 05:18:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93872 ']' 00:28:39.031 05:18:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93872 00:28:39.031 05:18:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:39.031 05:18:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.031 05:18:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93872 00:28:39.031 killing process with pid 93872 00:28:39.031 05:18:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:39.031 05:18:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:39.031 05:18:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93872' 00:28:39.031 05:18:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93872 00:28:39.031 05:18:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93872 00:28:39.292 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:39.292 05:18:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:39.292 [2024-11-28 05:18:08.517485] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
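Before the spdk_dd run starting above, the initiator-side JSON it consumes was captured as traced: a short-lived helper target (pid 93872, core 1) attaches to the NVMe/TCP export, its bdev subsystem state is wrapped in a subsystems array, and the helper is then killed. A condensed sketch follows; the redirection into ini.json is implied by the later -f check on that file rather than shown verbatim in the trace.

  rpc_ini() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock "$@"
  }

  rpc_ini bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2018-09.io.spdk:cnode0          # namespace shows up as ftln1
  {
      echo '{"subsystems": ['
      rpc_ini save_subsystem_config -n bdev          # bdev config of the helper target
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  kill "$spdk_ini_pid"                               # helper target no longer needed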
00:28:39.292 [2024-11-28 05:18:08.517782] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93897 ] 00:28:39.552 [2024-11-28 05:18:08.658776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.552 [2024-11-28 05:18:08.677889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.932  [2024-11-28T05:18:11.155Z] Copying: 261/1024 [MB] (261 MBps) [2024-11-28T05:18:12.092Z] Copying: 537/1024 [MB] (276 MBps) [2024-11-28T05:18:12.660Z] Copying: 811/1024 [MB] (274 MBps) [2024-11-28T05:18:12.919Z] Copying: 1024/1024 [MB] (average 272 MBps) 00:28:43.635 00:28:43.635 05:18:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:43.635 Calculate MD5 checksum, iteration 1 00:28:43.635 05:18:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:43.635 05:18:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:43.635 05:18:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:43.635 05:18:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:43.635 05:18:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:43.635 05:18:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:43.635 05:18:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:43.635 [2024-11-28 05:18:12.832319] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
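The read-back starting above is the verification half of iteration 1: the GiB just written is pulled from ftln1 into a scratch file over the same NVMe/TCP path, and its MD5 is stored for later comparison (5b8854f635275497221ee64dd81559b2 is the value this run records further down). A condensed sketch of that capture, with i being the iteration index from the surrounding loop:

  file=/home/vagrant/spdk_repo/spdk/test/ftl/file

  tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[i]=$(md5sum "$file" | cut -f1 '-d ')   # checksum of the 1 GiB read back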
00:28:43.635 [2024-11-28 05:18:12.832443] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93944 ] 00:28:43.894 [2024-11-28 05:18:12.973718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.894 [2024-11-28 05:18:12.990998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.273  [2024-11-28T05:18:14.815Z] Copying: 673/1024 [MB] (673 MBps) [2024-11-28T05:18:15.075Z] Copying: 1024/1024 [MB] (average 665 MBps) 00:28:45.791 00:28:45.791 05:18:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:45.791 05:18:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:47.696 05:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:47.696 Fill FTL, iteration 2 00:28:47.696 05:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=5b8854f635275497221ee64dd81559b2 00:28:47.696 05:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:47.696 05:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:47.696 05:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:47.696 05:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:47.696 05:18:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:47.696 05:18:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:47.696 05:18:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:47.696 05:18:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:47.696 05:18:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:47.956 [2024-11-28 05:18:17.031920] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
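With the second fill starting above, the upgrade_shutdown.sh line numbers in the trace (@38 through @48) outline the driving loop. Reconstructed from those traced commands, it looks roughly like the sketch below; the variable updates (seek=1024, skip=1024, and so on) appear literally in the trace, while the arithmetic form used here is an assumption.

  seek=0; skip=0; sums=()
  for ((i = 0; i < iterations; i++)); do        # iterations=2 in this run
      echo "Fill FTL, iteration $((i + 1))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))                    # next pass writes the following GiB
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of=$file --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$((skip + count))
      sums[i]=$(md5sum $file | cut -f1 '-d ')   # kept for comparison later in the test
  done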
00:28:47.956 [2024-11-28 05:18:17.032565] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93995 ] 00:28:47.956 [2024-11-28 05:18:17.178665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.956 [2024-11-28 05:18:17.197595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.334  [2024-11-28T05:18:19.556Z] Copying: 198/1024 [MB] (198 MBps) [2024-11-28T05:18:20.496Z] Copying: 418/1024 [MB] (220 MBps) [2024-11-28T05:18:21.435Z] Copying: 671/1024 [MB] (253 MBps) [2024-11-28T05:18:22.004Z] Copying: 928/1024 [MB] (257 MBps) [2024-11-28T05:18:22.004Z] Copying: 1024/1024 [MB] (average 234 MBps) 00:28:52.720 00:28:52.720 Calculate MD5 checksum, iteration 2 00:28:52.720 05:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:52.720 05:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:52.720 05:18:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:52.720 05:18:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:52.720 05:18:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:52.720 05:18:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:52.720 05:18:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:52.720 05:18:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:52.980 [2024-11-28 05:18:22.033449] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
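The offset bookkeeping is easy to misread in the traces, so it is worth spelling out: as with classic dd, --seek and --skip count blocks of --bs, not bytes.

    # bs=1048576 (1 MiB) and count=1024 blocks => exactly 1 GiB per pass:
    #   pass 1: --seek=0    / --skip=0     -> bytes [0, 1 GiB) of ftln1
    #   pass 2: --seek=1024 / --skip=1024  -> bytes [1 GiB, 2 GiB)
    # which is why the counters read seek=2048 and, below, skip=2048 once
    # both passes have completed.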
00:28:52.980 [2024-11-28 05:18:22.033557] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94047 ] 00:28:52.980 [2024-11-28 05:18:22.173984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.980 [2024-11-28 05:18:22.199451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.363  [2024-11-28T05:18:24.588Z] Copying: 558/1024 [MB] (558 MBps) [2024-11-28T05:18:27.880Z] Copying: 1024/1024 [MB] (average 593 MBps) 00:28:58.596 00:28:58.596 05:18:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:58.596 05:18:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:00.505 05:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:00.505 05:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c79bf9ce6f63922150d6aa2d25d6ed9d 00:29:00.505 05:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:00.505 05:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:00.505 05:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:00.505 [2024-11-28 05:18:29.653024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.505 [2024-11-28 05:18:29.653156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:00.505 [2024-11-28 05:18:29.653174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:00.505 [2024-11-28 05:18:29.653199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.505 [2024-11-28 05:18:29.653228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.505 [2024-11-28 05:18:29.653236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:00.505 [2024-11-28 05:18:29.653243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:00.505 [2024-11-28 05:18:29.653249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.505 [2024-11-28 05:18:29.653265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.505 [2024-11-28 05:18:29.653272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:00.505 [2024-11-28 05:18:29.653279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:00.506 [2024-11-28 05:18:29.653291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.506 [2024-11-28 05:18:29.653353] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.312 ms, result 0 00:29:00.506 true 00:29:00.506 05:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:00.765 { 00:29:00.765 "name": "ftl", 00:29:00.765 "properties": [ 00:29:00.765 { 00:29:00.765 "name": "superblock_version", 00:29:00.765 "value": 5, 00:29:00.765 "read-only": true 00:29:00.765 }, 00:29:00.765 { 00:29:00.765 "name": "base_device", 00:29:00.765 "bands": [ 00:29:00.765 { 00:29:00.765 "id": 0, 00:29:00.765 "state": "FREE", 00:29:00.765 "validity": 0.0 
00:29:00.765 }, 00:29:00.765 { 00:29:00.765 "id": 1, 00:29:00.765 "state": "FREE", 00:29:00.765 "validity": 0.0 00:29:00.765 }, 00:29:00.765 { 00:29:00.765 "id": 2, 00:29:00.765 "state": "FREE", 00:29:00.765 "validity": 0.0 00:29:00.765 }, 00:29:00.765 { 00:29:00.765 "id": 3, 00:29:00.765 "state": "FREE", 00:29:00.765 "validity": 0.0 00:29:00.765 }, 00:29:00.765 { 00:29:00.765 "id": 4, 00:29:00.765 "state": "FREE", 00:29:00.765 "validity": 0.0 00:29:00.765 }, 00:29:00.765 { 00:29:00.765 "id": 5, 00:29:00.765 "state": "FREE", 00:29:00.765 "validity": 0.0 00:29:00.765 }, 00:29:00.765 { 00:29:00.765 "id": 6, 00:29:00.765 "state": "FREE", 00:29:00.765 "validity": 0.0 00:29:00.765 }, 00:29:00.765 { 00:29:00.765 "id": 7, 00:29:00.765 "state": "FREE", 00:29:00.765 "validity": 0.0 00:29:00.765 }, 00:29:00.765 { 00:29:00.765 "id": 8, 00:29:00.765 "state": "FREE", 00:29:00.765 "validity": 0.0 00:29:00.765 }, 00:29:00.765 { 00:29:00.765 "id": 9, 00:29:00.765 "state": "FREE", 00:29:00.765 "validity": 0.0 00:29:00.765 }, 00:29:00.765 { 00:29:00.765 "id": 10, 00:29:00.765 "state": "FREE", 00:29:00.765 "validity": 0.0 00:29:00.765 }, 00:29:00.765 { 00:29:00.765 "id": 11, 00:29:00.765 "state": "FREE", 00:29:00.765 "validity": 0.0 00:29:00.765 }, 00:29:00.765 { 00:29:00.765 "id": 12, 00:29:00.765 "state": "FREE", 00:29:00.766 "validity": 0.0 00:29:00.766 }, 00:29:00.766 { 00:29:00.766 "id": 13, 00:29:00.766 "state": "FREE", 00:29:00.766 "validity": 0.0 00:29:00.766 }, 00:29:00.766 { 00:29:00.766 "id": 14, 00:29:00.766 "state": "FREE", 00:29:00.766 "validity": 0.0 00:29:00.766 }, 00:29:00.766 { 00:29:00.766 "id": 15, 00:29:00.766 "state": "FREE", 00:29:00.766 "validity": 0.0 00:29:00.766 }, 00:29:00.766 { 00:29:00.766 "id": 16, 00:29:00.766 "state": "FREE", 00:29:00.766 "validity": 0.0 00:29:00.766 }, 00:29:00.766 { 00:29:00.766 "id": 17, 00:29:00.766 "state": "FREE", 00:29:00.766 "validity": 0.0 00:29:00.766 } 00:29:00.766 ], 00:29:00.766 "read-only": true 00:29:00.766 }, 00:29:00.766 { 00:29:00.766 "name": "cache_device", 00:29:00.766 "type": "bdev", 00:29:00.766 "chunks": [ 00:29:00.766 { 00:29:00.766 "id": 0, 00:29:00.766 "state": "INACTIVE", 00:29:00.766 "utilization": 0.0 00:29:00.766 }, 00:29:00.766 { 00:29:00.766 "id": 1, 00:29:00.766 "state": "CLOSED", 00:29:00.766 "utilization": 1.0 00:29:00.766 }, 00:29:00.766 { 00:29:00.766 "id": 2, 00:29:00.766 "state": "CLOSED", 00:29:00.766 "utilization": 1.0 00:29:00.766 }, 00:29:00.766 { 00:29:00.766 "id": 3, 00:29:00.766 "state": "OPEN", 00:29:00.766 "utilization": 0.001953125 00:29:00.766 }, 00:29:00.766 { 00:29:00.766 "id": 4, 00:29:00.766 "state": "OPEN", 00:29:00.766 "utilization": 0.0 00:29:00.766 } 00:29:00.766 ], 00:29:00.766 "read-only": true 00:29:00.766 }, 00:29:00.766 { 00:29:00.766 "name": "verbose_mode", 00:29:00.766 "value": true, 00:29:00.766 "unit": "", 00:29:00.766 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:00.766 }, 00:29:00.766 { 00:29:00.766 "name": "prep_upgrade_on_shutdown", 00:29:00.766 "value": false, 00:29:00.766 "unit": "", 00:29:00.766 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:00.766 } 00:29:00.766 ] 00:29:00.766 } 00:29:00.766 05:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:01.025 [2024-11-28 05:18:30.053373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
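With verbose_mode enabled, bdev_ftl_get_properties exposes the state dumped above: all 18 base-device bands are still FREE, while the cache device reports chunks 1 and 2 CLOSED at utilization 1.0 and chunk 3 OPEN at 0.001953125, consistent with the 2 GiB just written still sitting in the NV cache. The harness distills that dump into a single number with the jq filter from upgrade_shutdown.sh@63, shown here in standalone form:

    # Count the cache chunks holding any data; the trace below reports used=3.
    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    # @64 then tests [[ $used -eq 0 ]]; a non-empty cache is required for the
    # prep_upgrade_on_shutdown path exercised next to be meaningful.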
00:29:01.025 [2024-11-28 05:18:30.053409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:01.025 [2024-11-28 05:18:30.053418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:01.025 [2024-11-28 05:18:30.053424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.025 [2024-11-28 05:18:30.053442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.025 [2024-11-28 05:18:30.053449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:01.025 [2024-11-28 05:18:30.053456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:01.025 [2024-11-28 05:18:30.053463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.025 [2024-11-28 05:18:30.053478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.025 [2024-11-28 05:18:30.053485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:01.025 [2024-11-28 05:18:30.053491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:01.025 [2024-11-28 05:18:30.053497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.025 [2024-11-28 05:18:30.053564] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.159 ms, result 0 00:29:01.025 true 00:29:01.025 05:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:01.025 05:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:01.025 05:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:01.025 05:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:01.025 05:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:01.025 05:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:01.283 [2024-11-28 05:18:30.457698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.283 [2024-11-28 05:18:30.457733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:01.283 [2024-11-28 05:18:30.457742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:01.283 [2024-11-28 05:18:30.457748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.283 [2024-11-28 05:18:30.457765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.283 [2024-11-28 05:18:30.457771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:01.284 [2024-11-28 05:18:30.457777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:01.284 [2024-11-28 05:18:30.457784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.284 [2024-11-28 05:18:30.457799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.284 [2024-11-28 05:18:30.457805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:01.284 [2024-11-28 05:18:30.457811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:01.284 [2024-11-28 05:18:30.457817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:01.284 [2024-11-28 05:18:30.457860] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.153 ms, result 0 00:29:01.284 true 00:29:01.284 05:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:01.543 { 00:29:01.543 "name": "ftl", 00:29:01.543 "properties": [ 00:29:01.543 { 00:29:01.543 "name": "superblock_version", 00:29:01.543 "value": 5, 00:29:01.543 "read-only": true 00:29:01.543 }, 00:29:01.543 { 00:29:01.543 "name": "base_device", 00:29:01.543 "bands": [ 00:29:01.543 { 00:29:01.543 "id": 0, 00:29:01.543 "state": "FREE", 00:29:01.543 "validity": 0.0 00:29:01.543 }, 00:29:01.543 { 00:29:01.543 "id": 1, 00:29:01.543 "state": "FREE", 00:29:01.543 "validity": 0.0 00:29:01.543 }, 00:29:01.543 { 00:29:01.543 "id": 2, 00:29:01.543 "state": "FREE", 00:29:01.543 "validity": 0.0 00:29:01.543 }, 00:29:01.543 { 00:29:01.543 "id": 3, 00:29:01.543 "state": "FREE", 00:29:01.543 "validity": 0.0 00:29:01.543 }, 00:29:01.543 { 00:29:01.543 "id": 4, 00:29:01.543 "state": "FREE", 00:29:01.543 "validity": 0.0 00:29:01.543 }, 00:29:01.543 { 00:29:01.543 "id": 5, 00:29:01.543 "state": "FREE", 00:29:01.543 "validity": 0.0 00:29:01.543 }, 00:29:01.543 { 00:29:01.543 "id": 6, 00:29:01.543 "state": "FREE", 00:29:01.543 "validity": 0.0 00:29:01.543 }, 00:29:01.543 { 00:29:01.543 "id": 7, 00:29:01.543 "state": "FREE", 00:29:01.543 "validity": 0.0 00:29:01.543 }, 00:29:01.543 { 00:29:01.543 "id": 8, 00:29:01.543 "state": "FREE", 00:29:01.543 "validity": 0.0 00:29:01.543 }, 00:29:01.543 { 00:29:01.543 "id": 9, 00:29:01.543 "state": "FREE", 00:29:01.543 "validity": 0.0 00:29:01.543 }, 00:29:01.543 { 00:29:01.543 "id": 10, 00:29:01.543 "state": "FREE", 00:29:01.543 "validity": 0.0 00:29:01.543 }, 00:29:01.543 { 00:29:01.543 "id": 11, 00:29:01.543 "state": "FREE", 00:29:01.543 "validity": 0.0 00:29:01.543 }, 00:29:01.543 { 00:29:01.543 "id": 12, 00:29:01.543 "state": "FREE", 00:29:01.543 "validity": 0.0 00:29:01.543 }, 00:29:01.543 { 00:29:01.543 "id": 13, 00:29:01.543 "state": "FREE", 00:29:01.543 "validity": 0.0 00:29:01.543 }, 00:29:01.543 { 00:29:01.543 "id": 14, 00:29:01.543 "state": "FREE", 00:29:01.543 "validity": 0.0 00:29:01.544 }, 00:29:01.544 { 00:29:01.544 "id": 15, 00:29:01.544 "state": "FREE", 00:29:01.544 "validity": 0.0 00:29:01.544 }, 00:29:01.544 { 00:29:01.544 "id": 16, 00:29:01.544 "state": "FREE", 00:29:01.544 "validity": 0.0 00:29:01.544 }, 00:29:01.544 { 00:29:01.544 "id": 17, 00:29:01.544 "state": "FREE", 00:29:01.544 "validity": 0.0 00:29:01.544 } 00:29:01.544 ], 00:29:01.544 "read-only": true 00:29:01.544 }, 00:29:01.544 { 00:29:01.544 "name": "cache_device", 00:29:01.544 "type": "bdev", 00:29:01.544 "chunks": [ 00:29:01.544 { 00:29:01.544 "id": 0, 00:29:01.544 "state": "INACTIVE", 00:29:01.544 "utilization": 0.0 00:29:01.544 }, 00:29:01.544 { 00:29:01.544 "id": 1, 00:29:01.544 "state": "CLOSED", 00:29:01.544 "utilization": 1.0 00:29:01.544 }, 00:29:01.544 { 00:29:01.544 "id": 2, 00:29:01.544 "state": "CLOSED", 00:29:01.544 "utilization": 1.0 00:29:01.544 }, 00:29:01.544 { 00:29:01.544 "id": 3, 00:29:01.544 "state": "OPEN", 00:29:01.544 "utilization": 0.001953125 00:29:01.544 }, 00:29:01.544 { 00:29:01.544 "id": 4, 00:29:01.544 "state": "OPEN", 00:29:01.544 "utilization": 0.0 00:29:01.544 } 00:29:01.544 ], 00:29:01.544 "read-only": true 00:29:01.544 }, 00:29:01.544 { 00:29:01.544 "name": "verbose_mode", 
00:29:01.544 "value": true, 00:29:01.544 "unit": "", 00:29:01.544 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:01.544 }, 00:29:01.544 { 00:29:01.544 "name": "prep_upgrade_on_shutdown", 00:29:01.544 "value": true, 00:29:01.544 "unit": "", 00:29:01.544 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:01.544 } 00:29:01.544 ] 00:29:01.544 } 00:29:01.544 05:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:01.544 05:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93761 ]] 00:29:01.544 05:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93761 00:29:01.544 05:18:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93761 ']' 00:29:01.544 05:18:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93761 00:29:01.544 05:18:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:01.544 05:18:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.544 05:18:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93761 00:29:01.544 05:18:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:01.544 05:18:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:01.544 05:18:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93761' 00:29:01.544 killing process with pid 93761 00:29:01.544 05:18:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93761 00:29:01.544 05:18:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93761 00:29:01.804 [2024-11-28 05:18:30.831648] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:01.804 [2024-11-28 05:18:30.835490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.804 [2024-11-28 05:18:30.835526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:01.804 [2024-11-28 05:18:30.835536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:01.804 [2024-11-28 05:18:30.835543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.804 [2024-11-28 05:18:30.835561] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:01.804 [2024-11-28 05:18:30.836071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.804 [2024-11-28 05:18:30.836095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:01.804 [2024-11-28 05:18:30.836103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.499 ms 00:29:01.804 [2024-11-28 05:18:30.836110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.939 [2024-11-28 05:18:38.829476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.940 [2024-11-28 05:18:38.829550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:09.940 [2024-11-28 05:18:38.829563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7993.321 ms 00:29:09.940 [2024-11-28 05:18:38.829571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.830652] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:09.940 [2024-11-28 05:18:38.830671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:09.940 [2024-11-28 05:18:38.830679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.068 ms 00:29:09.940 [2024-11-28 05:18:38.830685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.831562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.940 [2024-11-28 05:18:38.831588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:09.940 [2024-11-28 05:18:38.831596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.855 ms 00:29:09.940 [2024-11-28 05:18:38.831603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.833529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.940 [2024-11-28 05:18:38.833574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:09.940 [2024-11-28 05:18:38.833583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.885 ms 00:29:09.940 [2024-11-28 05:18:38.833590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.835803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.940 [2024-11-28 05:18:38.835831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:09.940 [2024-11-28 05:18:38.835840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.187 ms 00:29:09.940 [2024-11-28 05:18:38.835851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.835924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.940 [2024-11-28 05:18:38.835933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:09.940 [2024-11-28 05:18:38.835940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:29:09.940 [2024-11-28 05:18:38.835947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.837940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.940 [2024-11-28 05:18:38.837965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:09.940 [2024-11-28 05:18:38.837973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.981 ms 00:29:09.940 [2024-11-28 05:18:38.837979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.840064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.940 [2024-11-28 05:18:38.840089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:09.940 [2024-11-28 05:18:38.840096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.061 ms 00:29:09.940 [2024-11-28 05:18:38.840102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.841937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.940 [2024-11-28 05:18:38.841963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:09.940 [2024-11-28 05:18:38.841970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.812 ms 00:29:09.940 [2024-11-28 05:18:38.841976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.843549] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.940 [2024-11-28 05:18:38.843575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:09.940 [2024-11-28 05:18:38.843582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.526 ms 00:29:09.940 [2024-11-28 05:18:38.843588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.843611] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:09.940 [2024-11-28 05:18:38.843621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:09.940 [2024-11-28 05:18:38.843629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:09.940 [2024-11-28 05:18:38.843636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:09.940 [2024-11-28 05:18:38.843642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:09.940 [2024-11-28 05:18:38.843649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:09.940 [2024-11-28 05:18:38.843654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:09.940 [2024-11-28 05:18:38.843660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:09.940 [2024-11-28 05:18:38.843667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:09.940 [2024-11-28 05:18:38.843673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:09.940 [2024-11-28 05:18:38.843679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:09.940 [2024-11-28 05:18:38.843685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:09.940 [2024-11-28 05:18:38.843692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:09.940 [2024-11-28 05:18:38.843697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:09.940 [2024-11-28 05:18:38.843703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:09.940 [2024-11-28 05:18:38.843710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:09.940 [2024-11-28 05:18:38.843715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:09.940 [2024-11-28 05:18:38.843722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:09.940 [2024-11-28 05:18:38.843728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:09.940 [2024-11-28 05:18:38.843735] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:09.940 [2024-11-28 05:18:38.843741] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 79b7e35d-cd40-48e3-8543-1201adf9e9ba 00:29:09.940 [2024-11-28 05:18:38.843747] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:09.940 [2024-11-28 05:18:38.843757] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:09.940 [2024-11-28 05:18:38.843763] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:09.940 [2024-11-28 05:18:38.843770] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:09.940 [2024-11-28 05:18:38.843775] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:09.940 [2024-11-28 05:18:38.843782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:09.940 [2024-11-28 05:18:38.843789] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:09.940 [2024-11-28 05:18:38.843795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:09.940 [2024-11-28 05:18:38.843800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:09.940 [2024-11-28 05:18:38.843807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.940 [2024-11-28 05:18:38.843814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:09.940 [2024-11-28 05:18:38.843821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:29:09.940 [2024-11-28 05:18:38.843827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.845605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.940 [2024-11-28 05:18:38.845629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:09.940 [2024-11-28 05:18:38.845636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.765 ms 00:29:09.940 [2024-11-28 05:18:38.845642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.845729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.940 [2024-11-28 05:18:38.845736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:09.940 [2024-11-28 05:18:38.845742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:29:09.940 [2024-11-28 05:18:38.845748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.851770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.940 [2024-11-28 05:18:38.851805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:09.940 [2024-11-28 05:18:38.851814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.940 [2024-11-28 05:18:38.851820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.851844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.940 [2024-11-28 05:18:38.851852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:09.940 [2024-11-28 05:18:38.851858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.940 [2024-11-28 05:18:38.851865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.851919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.940 [2024-11-28 05:18:38.851927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:09.940 [2024-11-28 05:18:38.851934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.940 [2024-11-28 05:18:38.851942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.851956] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.940 [2024-11-28 05:18:38.851963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:09.940 [2024-11-28 05:18:38.851973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.940 [2024-11-28 05:18:38.851983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.862996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.940 [2024-11-28 05:18:38.863034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:09.940 [2024-11-28 05:18:38.863043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.940 [2024-11-28 05:18:38.863050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.940 [2024-11-28 05:18:38.871649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.940 [2024-11-28 05:18:38.871692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:09.940 [2024-11-28 05:18:38.871700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.941 [2024-11-28 05:18:38.871709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.941 [2024-11-28 05:18:38.871770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.941 [2024-11-28 05:18:38.871782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:09.941 [2024-11-28 05:18:38.871788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.941 [2024-11-28 05:18:38.871795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.941 [2024-11-28 05:18:38.871823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.941 [2024-11-28 05:18:38.871830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:09.941 [2024-11-28 05:18:38.871837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.941 [2024-11-28 05:18:38.871844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.941 [2024-11-28 05:18:38.871901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.941 [2024-11-28 05:18:38.871910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:09.941 [2024-11-28 05:18:38.871919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.941 [2024-11-28 05:18:38.871926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.941 [2024-11-28 05:18:38.871950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.941 [2024-11-28 05:18:38.871957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:09.941 [2024-11-28 05:18:38.871964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.941 [2024-11-28 05:18:38.871969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.941 [2024-11-28 05:18:38.872005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.941 [2024-11-28 05:18:38.872013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:09.941 [2024-11-28 05:18:38.872022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.941 [2024-11-28 05:18:38.872028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.941 
[2024-11-28 05:18:38.872070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:09.941 [2024-11-28 05:18:38.872084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:09.941 [2024-11-28 05:18:38.872091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:09.941 [2024-11-28 05:18:38.872098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.941 [2024-11-28 05:18:38.872226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8036.665 ms, result 0 00:29:10.886 05:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:10.886 05:18:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:10.886 05:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:10.886 05:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:10.886 05:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:10.886 05:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94252 00:29:10.886 05:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:10.886 05:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94252 00:29:10.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.886 05:18:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94252 ']' 00:29:10.886 05:18:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.886 05:18:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.886 05:18:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.886 05:18:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.886 05:18:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:10.886 05:18:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:10.886 [2024-11-28 05:18:40.031125] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
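This shutdown is the heart of the test: with prep_upgrade_on_shutdown set to true, killing the target makes FTL persist the L2P, NV cache, valid map, P2L, band and trim metadata and write a clean superblock before the process exits; the 'FTL shutdown' management process above totals 8036.665 ms, and the statistics dump confirms WAF = 786752 total writes / 524288 user writes, which is approximately 1.5006. The harness then starts a fresh target on the same configuration. A condensed sketch of the shutdown/restart, reconstructed from the killprocess and tcp_target_setup traces (helper internals simplified):

    kill -0 93761 && kill 93761   # SIGTERM triggers the FTL shutdown sequence
    wait 93761                    # returns only once persistence has finished
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!               # 94252 in this run
    # stand-in for waitforlisten: poll until the RPC socket is accepting
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

The log that follows shows the recovery path on the new target end to end: cachen1 is not found on the first bdev_open_ext attempts, the v5 superblock is loaded and validated, the NV cache data region is scrubbed, and the restore chain (NV cache, valid map, band info, trim, P2L checkpoints, L2P) runs before the core poller starts.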
00:29:10.886 [2024-11-28 05:18:40.031285] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94252 ] 00:29:11.148 [2024-11-28 05:18:40.180056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.148 [2024-11-28 05:18:40.212212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.428 [2024-11-28 05:18:40.550998] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:11.428 [2024-11-28 05:18:40.551100] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:11.784 [2024-11-28 05:18:40.703580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.784 [2024-11-28 05:18:40.703644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:11.784 [2024-11-28 05:18:40.703662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:11.784 [2024-11-28 05:18:40.703671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.784 [2024-11-28 05:18:40.703734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.784 [2024-11-28 05:18:40.703747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:11.784 [2024-11-28 05:18:40.703757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:29:11.784 [2024-11-28 05:18:40.703765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.784 [2024-11-28 05:18:40.703794] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:11.784 [2024-11-28 05:18:40.704129] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:11.784 [2024-11-28 05:18:40.704161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.784 [2024-11-28 05:18:40.704195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:11.784 [2024-11-28 05:18:40.704206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.372 ms 00:29:11.784 [2024-11-28 05:18:40.704215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.784 [2024-11-28 05:18:40.706060] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:11.784 [2024-11-28 05:18:40.709954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.784 [2024-11-28 05:18:40.710023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:11.784 [2024-11-28 05:18:40.710036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.896 ms 00:29:11.784 [2024-11-28 05:18:40.710045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.784 [2024-11-28 05:18:40.710136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.784 [2024-11-28 05:18:40.710147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:11.784 [2024-11-28 05:18:40.710158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:29:11.784 [2024-11-28 05:18:40.710168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.784 [2024-11-28 05:18:40.718961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.784 [2024-11-28 
05:18:40.719009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:11.784 [2024-11-28 05:18:40.719022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.716 ms 00:29:11.784 [2024-11-28 05:18:40.719034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.784 [2024-11-28 05:18:40.719089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.784 [2024-11-28 05:18:40.719097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:11.784 [2024-11-28 05:18:40.719107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:11.784 [2024-11-28 05:18:40.719115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.784 [2024-11-28 05:18:40.719232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.784 [2024-11-28 05:18:40.719253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:11.784 [2024-11-28 05:18:40.719262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:11.784 [2024-11-28 05:18:40.719273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.784 [2024-11-28 05:18:40.719300] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:11.784 [2024-11-28 05:18:40.721481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.784 [2024-11-28 05:18:40.721532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:11.784 [2024-11-28 05:18:40.721558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.186 ms 00:29:11.784 [2024-11-28 05:18:40.721566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.784 [2024-11-28 05:18:40.721612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.784 [2024-11-28 05:18:40.721621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:11.784 [2024-11-28 05:18:40.721635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:11.784 [2024-11-28 05:18:40.721644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.784 [2024-11-28 05:18:40.721666] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:11.784 [2024-11-28 05:18:40.721693] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:11.785 [2024-11-28 05:18:40.721732] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:11.785 [2024-11-28 05:18:40.721752] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:11.785 [2024-11-28 05:18:40.721862] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:11.785 [2024-11-28 05:18:40.721874] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:11.785 [2024-11-28 05:18:40.721886] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:11.785 [2024-11-28 05:18:40.721898] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:11.785 [2024-11-28 05:18:40.721909] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:11.785 [2024-11-28 05:18:40.721918] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:11.785 [2024-11-28 05:18:40.721925] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:11.785 [2024-11-28 05:18:40.721934] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:11.785 [2024-11-28 05:18:40.721943] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:11.785 [2024-11-28 05:18:40.721955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.785 [2024-11-28 05:18:40.721966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:11.785 [2024-11-28 05:18:40.721974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.292 ms 00:29:11.785 [2024-11-28 05:18:40.721981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.785 [2024-11-28 05:18:40.722068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.785 [2024-11-28 05:18:40.722077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:11.785 [2024-11-28 05:18:40.722084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:29:11.785 [2024-11-28 05:18:40.722095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.785 [2024-11-28 05:18:40.722221] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:11.785 [2024-11-28 05:18:40.722244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:11.785 [2024-11-28 05:18:40.722258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:11.785 [2024-11-28 05:18:40.722268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.785 [2024-11-28 05:18:40.722278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:11.785 [2024-11-28 05:18:40.722286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:11.785 [2024-11-28 05:18:40.722296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:11.785 [2024-11-28 05:18:40.722306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:11.785 [2024-11-28 05:18:40.722314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:11.785 [2024-11-28 05:18:40.722322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.785 [2024-11-28 05:18:40.722330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:11.785 [2024-11-28 05:18:40.722337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:11.785 [2024-11-28 05:18:40.722345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.785 [2024-11-28 05:18:40.722353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:11.785 [2024-11-28 05:18:40.722362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:11.785 [2024-11-28 05:18:40.722379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.785 [2024-11-28 05:18:40.722388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:11.785 [2024-11-28 05:18:40.722397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:11.785 [2024-11-28 05:18:40.722404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.785 [2024-11-28 05:18:40.722414] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:11.785 [2024-11-28 05:18:40.722422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:11.785 [2024-11-28 05:18:40.722429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.785 [2024-11-28 05:18:40.722437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:11.785 [2024-11-28 05:18:40.722445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:11.785 [2024-11-28 05:18:40.722452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.785 [2024-11-28 05:18:40.722460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:11.785 [2024-11-28 05:18:40.722468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:11.785 [2024-11-28 05:18:40.722476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.785 [2024-11-28 05:18:40.722484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:11.785 [2024-11-28 05:18:40.722492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:11.785 [2024-11-28 05:18:40.722502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.785 [2024-11-28 05:18:40.722513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:11.785 [2024-11-28 05:18:40.722522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:11.785 [2024-11-28 05:18:40.722530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.785 [2024-11-28 05:18:40.722537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:11.785 [2024-11-28 05:18:40.722544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:11.785 [2024-11-28 05:18:40.722551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.785 [2024-11-28 05:18:40.722559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:11.785 [2024-11-28 05:18:40.722567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:11.785 [2024-11-28 05:18:40.722574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.785 [2024-11-28 05:18:40.722581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:11.785 [2024-11-28 05:18:40.722588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:11.785 [2024-11-28 05:18:40.722595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.785 [2024-11-28 05:18:40.722602] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:11.785 [2024-11-28 05:18:40.722610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:11.785 [2024-11-28 05:18:40.722618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:11.785 [2024-11-28 05:18:40.722625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.785 [2024-11-28 05:18:40.722637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:11.785 [2024-11-28 05:18:40.722644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:11.785 [2024-11-28 05:18:40.722652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:11.785 [2024-11-28 05:18:40.722659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:11.785 [2024-11-28 05:18:40.722666] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:11.785 [2024-11-28 05:18:40.722673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:11.785 [2024-11-28 05:18:40.722681] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:11.785 [2024-11-28 05:18:40.722691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:11.785 [2024-11-28 05:18:40.722700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:11.785 [2024-11-28 05:18:40.722707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:11.785 [2024-11-28 05:18:40.722715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:11.785 [2024-11-28 05:18:40.722725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:11.785 [2024-11-28 05:18:40.722732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:11.785 [2024-11-28 05:18:40.722740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:11.786 [2024-11-28 05:18:40.722750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:11.786 [2024-11-28 05:18:40.722757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:11.786 [2024-11-28 05:18:40.722767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:11.786 [2024-11-28 05:18:40.722774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:11.786 [2024-11-28 05:18:40.722781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:11.786 [2024-11-28 05:18:40.722789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:11.786 [2024-11-28 05:18:40.722797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:11.786 [2024-11-28 05:18:40.722805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:11.786 [2024-11-28 05:18:40.722812] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:11.786 [2024-11-28 05:18:40.722821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:11.786 [2024-11-28 05:18:40.722829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:11.786 [2024-11-28 05:18:40.722837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:11.786 [2024-11-28 05:18:40.722845] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:11.786 [2024-11-28 05:18:40.722860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:11.786 [2024-11-28 05:18:40.722868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.786 [2024-11-28 05:18:40.722879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:11.786 [2024-11-28 05:18:40.722887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.734 ms 00:29:11.786 [2024-11-28 05:18:40.722899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.786 [2024-11-28 05:18:40.722947] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:11.786 [2024-11-28 05:18:40.722966] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:15.992 [2024-11-28 05:18:44.749138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.992 [2024-11-28 05:18:44.749243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:15.993 [2024-11-28 05:18:44.749261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4026.178 ms 00:29:15.993 [2024-11-28 05:18:44.749279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.763456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.763516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:15.993 [2024-11-28 05:18:44.763532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.071 ms 00:29:15.993 [2024-11-28 05:18:44.763542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.763604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.763615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:15.993 [2024-11-28 05:18:44.763627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:15.993 [2024-11-28 05:18:44.763649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.776401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.776455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:15.993 [2024-11-28 05:18:44.776468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.694 ms 00:29:15.993 [2024-11-28 05:18:44.776476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.776524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.776533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:15.993 [2024-11-28 05:18:44.776549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:15.993 [2024-11-28 05:18:44.776562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.777110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.777157] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:15.993 [2024-11-28 05:18:44.777170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.486 ms 00:29:15.993 [2024-11-28 05:18:44.777197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.777256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.777267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:15.993 [2024-11-28 05:18:44.777277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:15.993 [2024-11-28 05:18:44.777297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.785674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.785730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:15.993 [2024-11-28 05:18:44.785759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.351 ms 00:29:15.993 [2024-11-28 05:18:44.785773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.797752] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:15.993 [2024-11-28 05:18:44.797820] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:15.993 [2024-11-28 05:18:44.797843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.797855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:15.993 [2024-11-28 05:18:44.797868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.938 ms 00:29:15.993 [2024-11-28 05:18:44.797877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.803524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.803577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:15.993 [2024-11-28 05:18:44.803591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.608 ms 00:29:15.993 [2024-11-28 05:18:44.803612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.806512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.806560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:15.993 [2024-11-28 05:18:44.806572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.835 ms 00:29:15.993 [2024-11-28 05:18:44.806580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.809301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.809357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:15.993 [2024-11-28 05:18:44.809368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.693 ms 00:29:15.993 [2024-11-28 05:18:44.809376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.809814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.809849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:15.993 [2024-11-28 
05:18:44.809865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.372 ms 00:29:15.993 [2024-11-28 05:18:44.809878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.833212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.833258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:15.993 [2024-11-28 05:18:44.833271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.302 ms 00:29:15.993 [2024-11-28 05:18:44.833280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.841334] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:15.993 [2024-11-28 05:18:44.842309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.842348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:15.993 [2024-11-28 05:18:44.842361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.975 ms 00:29:15.993 [2024-11-28 05:18:44.842369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.842452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.842464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:15.993 [2024-11-28 05:18:44.842474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:15.993 [2024-11-28 05:18:44.842482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.842529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.842539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:15.993 [2024-11-28 05:18:44.842551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:15.993 [2024-11-28 05:18:44.842559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.842582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.842591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:15.993 [2024-11-28 05:18:44.842600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:15.993 [2024-11-28 05:18:44.842608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.842644] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:15.993 [2024-11-28 05:18:44.842657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.842665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:15.993 [2024-11-28 05:18:44.842677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:15.993 [2024-11-28 05:18:44.842685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.993 [2024-11-28 05:18:44.847762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.993 [2024-11-28 05:18:44.847810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:15.993 [2024-11-28 05:18:44.847821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.054 ms 00:29:15.993 [2024-11-28 05:18:44.847829] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0
00:29:15.993 [2024-11-28 05:18:44.847916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:15.993 [2024-11-28 05:18:44.847927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:29:15.993 [2024-11-28 05:18:44.847937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms
00:29:15.993 [2024-11-28 05:18:44.847948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:15.993 [2024-11-28 05:18:44.849136] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4145.116 ms, result 0
00:29:15.993 [2024-11-28 05:18:44.862731] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:15.993 [2024-11-28 05:18:44.878781] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:29:15.993 [2024-11-28 05:18:44.886865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:29:15.993 05:18:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:15.993 05:18:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:29:15.993 05:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:29:15.993 05:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:29:15.993 05:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:29:15.993 [2024-11-28 05:18:45.122952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:15.993 [2024-11-28 05:18:45.123015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:29:15.993 [2024-11-28 05:18:45.123030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms
00:29:15.993 [2024-11-28 05:18:45.123039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:15.993 [2024-11-28 05:18:45.123065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:15.993 [2024-11-28 05:18:45.123074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:29:15.994 [2024-11-28 05:18:45.123087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:29:15.994 [2024-11-28 05:18:45.123095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:15.994 [2024-11-28 05:18:45.123121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:15.994 [2024-11-28 05:18:45.123131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:29:15.994 [2024-11-28 05:18:45.123140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:29:15.994 [2024-11-28 05:18:45.123148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:15.994 [2024-11-28 05:18:45.123234] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.252 ms, result 0
00:29:15.994 true
00:29:15.994 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:29:16.254 {
00:29:16.254 "name": "ftl",
00:29:16.254 "properties": [
00:29:16.254 {
00:29:16.254 "name": "superblock_version",
00:29:16.254 "value": 5,
00:29:16.254 "read-only": true
00:29:16.254 },
00:29:16.254 {
00:29:16.254 "name": "base_device",
00:29:16.254 "bands": [
00:29:16.254 {
00:29:16.254 "id": 0,
00:29:16.254 "state": "CLOSED",
00:29:16.254 "validity": 1.0
00:29:16.254 },
00:29:16.254 {
00:29:16.254 "id": 1,
00:29:16.254 "state": "CLOSED",
00:29:16.254 "validity": 1.0
00:29:16.254 },
00:29:16.254 {
00:29:16.254 "id": 2,
00:29:16.254 "state": "CLOSED",
00:29:16.254 "validity": 0.007843137254901933
00:29:16.254 },
00:29:16.254 {
00:29:16.254 "id": 3,
00:29:16.254 "state": "FREE",
00:29:16.254 "validity": 0.0
00:29:16.254 },
00:29:16.254 {
00:29:16.254 "id": 4,
00:29:16.254 "state": "FREE",
00:29:16.254 "validity": 0.0
00:29:16.254 },
00:29:16.254 {
00:29:16.254 "id": 5,
00:29:16.254 "state": "FREE",
00:29:16.254 "validity": 0.0
00:29:16.254 },
00:29:16.254 {
00:29:16.254 "id": 6,
00:29:16.254 "state": "FREE",
00:29:16.254 "validity": 0.0
00:29:16.254 },
00:29:16.254 {
00:29:16.254 "id": 7,
00:29:16.254 "state": "FREE",
00:29:16.254 "validity": 0.0
00:29:16.254 },
00:29:16.254 {
00:29:16.254 "id": 8,
00:29:16.254 "state": "FREE",
00:29:16.254 "validity": 0.0
00:29:16.254 },
00:29:16.254 {
00:29:16.254 "id": 9,
00:29:16.254 "state": "FREE",
00:29:16.254 "validity": 0.0
00:29:16.254 },
00:29:16.254 {
00:29:16.254 "id": 10,
00:29:16.254 "state": "FREE",
00:29:16.254 "validity": 0.0
00:29:16.254 },
00:29:16.254 {
00:29:16.254 "id": 11,
00:29:16.254 "state": "FREE",
00:29:16.254 "validity": 0.0
00:29:16.254 },
00:29:16.254 {
00:29:16.254 "id": 12,
00:29:16.254 "state": "FREE",
00:29:16.254 "validity": 0.0
00:29:16.254 },
00:29:16.254 {
00:29:16.254 "id": 13,
00:29:16.254 "state": "FREE",
00:29:16.254 "validity": 0.0
00:29:16.254 },
00:29:16.254 {
00:29:16.254 "id": 14,
00:29:16.254 "state": "FREE",
00:29:16.254 "validity": 0.0
00:29:16.255 },
00:29:16.255 {
00:29:16.255 "id": 15,
00:29:16.255 "state": "FREE",
00:29:16.255 "validity": 0.0
00:29:16.255 },
00:29:16.255 {
00:29:16.255 "id": 16,
00:29:16.255 "state": "FREE",
00:29:16.255 "validity": 0.0
00:29:16.255 },
00:29:16.255 {
00:29:16.255 "id": 17,
00:29:16.255 "state": "FREE",
00:29:16.255 "validity": 0.0
00:29:16.255 }
00:29:16.255 ],
00:29:16.255 "read-only": true
00:29:16.255 },
00:29:16.255 {
00:29:16.255 "name": "cache_device",
00:29:16.255 "type": "bdev",
00:29:16.255 "chunks": [
00:29:16.255 {
00:29:16.255 "id": 0,
00:29:16.255 "state": "INACTIVE",
00:29:16.255 "utilization": 0.0
00:29:16.255 },
00:29:16.255 {
00:29:16.255 "id": 1,
00:29:16.255 "state": "OPEN",
00:29:16.255 "utilization": 0.0
00:29:16.255 },
00:29:16.255 {
00:29:16.255 "id": 2,
00:29:16.255 "state": "OPEN",
00:29:16.255 "utilization": 0.0
00:29:16.255 },
00:29:16.255 {
00:29:16.255 "id": 3,
00:29:16.255 "state": "FREE",
00:29:16.255 "utilization": 0.0
00:29:16.255 },
00:29:16.255 {
00:29:16.255 "id": 4,
00:29:16.255 "state": "FREE",
00:29:16.255 "utilization": 0.0
00:29:16.255 }
00:29:16.255 ],
00:29:16.255 "read-only": true
00:29:16.255 },
00:29:16.255 {
00:29:16.255 "name": "verbose_mode",
00:29:16.255 "value": true,
00:29:16.255 "unit": "",
00:29:16.255 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:29:16.255 },
00:29:16.255 {
00:29:16.255 "name": "prep_upgrade_on_shutdown",
00:29:16.255 "value": false,
00:29:16.255 "unit": "",
00:29:16.255 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:29:16.255 }
00:29:16.255 ]
00:29:16.255 }
00:29:16.255 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:29:16.255 05:18:45
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:16.255 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:16.515 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:16.515 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:16.515 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:16.515 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:16.515 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:16.776 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:16.776 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:16.776 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:16.776 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:16.776 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:16.776 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:16.776 Validate MD5 checksum, iteration 1 00:29:16.776 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:16.776 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:16.776 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:16.776 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:16.776 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:16.776 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:16.776 05:18:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:16.776 [2024-11-28 05:18:45.875705] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
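The xtrace above (upgrade_shutdown.sh lines 96 through 99) is the whole checksum pass: read 1 GiB off the ftln1 bdev over NVMe/TCP in 1 MiB blocks at queue depth 2, then hash the resulting file. A minimal sketch of that loop as implied by this trace, not the verbatim script; the tcp_dd helper and ini.json initiator config are the ones traced above, while $testdir (standing in for /home/vagrant/spdk_repo/spdk/test/ftl) and the hard-coded iteration count are assumptions read off this log:

  skip=0
  iterations=2   # assumption: two passes are visible in this log
  for (( i = 0; i < iterations; i++ )); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      # tcp_dd wraps spdk_dd with the NVMe/TCP initiator JSON (ini.json), per the trace
      tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
      skip=$((skip + 1024))   # upgrade_shutdown.sh@100 in the trace: skip=1024, then 2048
  done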
00:29:16.776 [2024-11-28 05:18:45.875845] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94321 ] 00:29:16.776 [2024-11-28 05:18:46.021761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.036 [2024-11-28 05:18:46.062087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.421  [2024-11-28T05:18:48.646Z] Copying: 518/1024 [MB] (518 MBps) [2024-11-28T05:18:49.216Z] Copying: 1024/1024 [MB] (average 519 MBps) 00:29:19.932 00:29:19.932 05:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:19.932 05:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:22.461 05:18:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:22.461 Validate MD5 checksum, iteration 2 00:29:22.461 05:18:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5b8854f635275497221ee64dd81559b2 00:29:22.461 05:18:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5b8854f635275497221ee64dd81559b2 != \5\b\8\8\5\4\f\6\3\5\2\7\5\4\9\7\2\2\1\e\e\6\4\d\d\8\1\5\5\9\b\2 ]] 00:29:22.461 05:18:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:22.461 05:18:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:22.461 05:18:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:22.461 05:18:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:22.461 05:18:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:22.461 05:18:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:22.461 05:18:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:22.461 05:18:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:22.461 05:18:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:22.461 [2024-11-28 05:18:51.308482] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
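Iteration 1 above hashed to 5b8854f635275497221ee64dd81559b2, and the second pass now starting reads the next GiB (skip=1024). The comparison at upgrade_shutdown.sh line 105 probably amounts to the sketch below; $expected_md5 is an assumed name for the checksum recorded when the test pattern was first written:

  sum=$(md5sum "$testdir/file" | cut -f1 '-d ')
  # Unquoted, the right-hand side of != inside [[ ]] is a glob pattern, so xtrace
  # prints it with every character backslash-escaped, which is the \5\b\8\8... above.
  [[ $sum != "$expected_md5" ]] && return 1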
00:29:22.461 [2024-11-28 05:18:51.308589] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94388 ] 00:29:22.461 [2024-11-28 05:18:51.448338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.461 [2024-11-28 05:18:51.470308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.847  [2024-11-28T05:18:53.703Z] Copying: 643/1024 [MB] (643 MBps) [2024-11-28T05:18:53.964Z] Copying: 1024/1024 [MB] (average 625 MBps) 00:29:24.680 00:29:24.680 05:18:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:24.680 05:18:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c79bf9ce6f63922150d6aa2d25d6ed9d 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c79bf9ce6f63922150d6aa2d25d6ed9d != \c\7\9\b\f\9\c\e\6\f\6\3\9\2\2\1\5\0\d\6\a\a\2\d\2\5\d\6\e\d\9\d ]] 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 94252 ]] 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 94252 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94439 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94439 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94439 ']' 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
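The kill -9 of pid 94252 traced above is the point of the test: the target dies without ever running FTL's shutdown path, so the device comes back dirty and the relaunched target (pid 94439) must recover, which is what the "SHM: clean 0, shm_clean 0", band-state recovery, and open-chunk replay in the trace below are. A sketch of the shutdown and restart helpers as traced from common.sh lines 137-139 and 81-91; the variable names match the job-control message below, while the backgrounding glue is an assumption:

  kill -9 "$spdk_tgt_pid"          # SIGKILL: no clean FTL shutdown, NV cache stays dirty
  unset spdk_tgt_pid
  # Relaunch from the config saved earlier at test/ftl/config/tgt.json, then
  # wait for the RPC socket before issuing any further commands.
  $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"    # polls /var/tmp/spdk.sock, up to max_retries=100 tries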
00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:27.224 05:18:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:27.224 [2024-11-28 05:18:56.183024] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:29:27.224 [2024-11-28 05:18:56.183138] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94439 ] 00:29:27.224 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 94252 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:27.224 [2024-11-28 05:18:56.328067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.224 [2024-11-28 05:18:56.354165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.486 [2024-11-28 05:18:56.676396] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:27.486 [2024-11-28 05:18:56.676470] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:27.747 [2024-11-28 05:18:56.823993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.747 [2024-11-28 05:18:56.824050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:27.747 [2024-11-28 05:18:56.824068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:27.747 [2024-11-28 05:18:56.824077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.747 [2024-11-28 05:18:56.824132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.747 [2024-11-28 05:18:56.824145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:27.747 [2024-11-28 05:18:56.824154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:29:27.747 [2024-11-28 05:18:56.824162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.747 [2024-11-28 05:18:56.824199] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:27.747 [2024-11-28 05:18:56.824456] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:27.747 [2024-11-28 05:18:56.824480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.747 [2024-11-28 05:18:56.824490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:27.747 [2024-11-28 05:18:56.824499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.287 ms 00:29:27.747 [2024-11-28 05:18:56.824506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.747 [2024-11-28 05:18:56.824936] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:27.747 [2024-11-28 05:18:56.829942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.747 [2024-11-28 05:18:56.829991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:27.747 [2024-11-28 05:18:56.830002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.008 ms 
00:29:27.747 [2024-11-28 05:18:56.830010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.747 [2024-11-28 05:18:56.831299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.747 [2024-11-28 05:18:56.831334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:27.747 [2024-11-28 05:18:56.831345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:29:27.747 [2024-11-28 05:18:56.831356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.747 [2024-11-28 05:18:56.831633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.747 [2024-11-28 05:18:56.831650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:27.747 [2024-11-28 05:18:56.831660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.230 ms 00:29:27.747 [2024-11-28 05:18:56.831667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.747 [2024-11-28 05:18:56.831705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.748 [2024-11-28 05:18:56.831715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:27.748 [2024-11-28 05:18:56.831723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:27.748 [2024-11-28 05:18:56.831730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.748 [2024-11-28 05:18:56.831759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.748 [2024-11-28 05:18:56.831775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:27.748 [2024-11-28 05:18:56.831786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:27.748 [2024-11-28 05:18:56.831798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.748 [2024-11-28 05:18:56.831823] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:27.748 [2024-11-28 05:18:56.832828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.748 [2024-11-28 05:18:56.832861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:27.748 [2024-11-28 05:18:56.832870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.014 ms 00:29:27.748 [2024-11-28 05:18:56.832878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.748 [2024-11-28 05:18:56.832908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.748 [2024-11-28 05:18:56.832925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:27.748 [2024-11-28 05:18:56.832934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:27.748 [2024-11-28 05:18:56.832941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.748 [2024-11-28 05:18:56.832974] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:27.748 [2024-11-28 05:18:56.832995] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:27.748 [2024-11-28 05:18:56.833031] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:27.748 [2024-11-28 05:18:56.833048] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:27.748 [2024-11-28 
05:18:56.833156] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:27.748 [2024-11-28 05:18:56.833168] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:27.748 [2024-11-28 05:18:56.833195] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:27.748 [2024-11-28 05:18:56.833207] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:27.748 [2024-11-28 05:18:56.833217] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:27.748 [2024-11-28 05:18:56.833226] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:27.748 [2024-11-28 05:18:56.833233] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:27.748 [2024-11-28 05:18:56.833244] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:27.748 [2024-11-28 05:18:56.833253] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:27.748 [2024-11-28 05:18:56.833262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.748 [2024-11-28 05:18:56.833272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:27.748 [2024-11-28 05:18:56.833281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.290 ms 00:29:27.748 [2024-11-28 05:18:56.833288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.748 [2024-11-28 05:18:56.833372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.748 [2024-11-28 05:18:56.833389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:27.748 [2024-11-28 05:18:56.833401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:29:27.748 [2024-11-28 05:18:56.833408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.748 [2024-11-28 05:18:56.833510] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:27.748 [2024-11-28 05:18:56.833522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:27.748 [2024-11-28 05:18:56.833545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:27.748 [2024-11-28 05:18:56.833558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.748 [2024-11-28 05:18:56.833571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:27.748 [2024-11-28 05:18:56.833579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:27.748 [2024-11-28 05:18:56.833587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:27.748 [2024-11-28 05:18:56.833595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:27.748 [2024-11-28 05:18:56.833605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:27.748 [2024-11-28 05:18:56.833613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.748 [2024-11-28 05:18:56.833621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:27.748 [2024-11-28 05:18:56.833629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:27.748 [2024-11-28 05:18:56.833636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.748 [2024-11-28 
05:18:56.833645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:27.748 [2024-11-28 05:18:56.833658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:27.748 [2024-11-28 05:18:56.833666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.748 [2024-11-28 05:18:56.833676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:27.748 [2024-11-28 05:18:56.833683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:27.748 [2024-11-28 05:18:56.833691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.748 [2024-11-28 05:18:56.833700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:27.748 [2024-11-28 05:18:56.833707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:27.748 [2024-11-28 05:18:56.833715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:27.748 [2024-11-28 05:18:56.833722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:27.748 [2024-11-28 05:18:56.833729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:27.748 [2024-11-28 05:18:56.833736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:27.748 [2024-11-28 05:18:56.833744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:27.748 [2024-11-28 05:18:56.833752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:27.748 [2024-11-28 05:18:56.833759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:27.748 [2024-11-28 05:18:56.833767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:27.748 [2024-11-28 05:18:56.833774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:27.748 [2024-11-28 05:18:56.833784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:27.748 [2024-11-28 05:18:56.833792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:27.748 [2024-11-28 05:18:56.833800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:27.748 [2024-11-28 05:18:56.833808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.748 [2024-11-28 05:18:56.833815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:27.748 [2024-11-28 05:18:56.833823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:27.748 [2024-11-28 05:18:56.833832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.748 [2024-11-28 05:18:56.833840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:27.748 [2024-11-28 05:18:56.833848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:27.748 [2024-11-28 05:18:56.833856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.748 [2024-11-28 05:18:56.833863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:27.748 [2024-11-28 05:18:56.833870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:27.748 [2024-11-28 05:18:56.833878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.748 [2024-11-28 05:18:56.833885] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:27.748 [2024-11-28 05:18:56.833894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:27.748 
[2024-11-28 05:18:56.833903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:27.748 [2024-11-28 05:18:56.833914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:27.748 [2024-11-28 05:18:56.833929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:27.748 [2024-11-28 05:18:56.833938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:27.748 [2024-11-28 05:18:56.833946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:27.748 [2024-11-28 05:18:56.833955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:27.748 [2024-11-28 05:18:56.833962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:27.748 [2024-11-28 05:18:56.833970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:27.748 [2024-11-28 05:18:56.833979] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:27.748 [2024-11-28 05:18:56.833989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:27.748 [2024-11-28 05:18:56.834000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:27.748 [2024-11-28 05:18:56.834008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:27.748 [2024-11-28 05:18:56.834016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:27.748 [2024-11-28 05:18:56.834023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:27.748 [2024-11-28 05:18:56.834031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:27.748 [2024-11-28 05:18:56.834039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:27.748 [2024-11-28 05:18:56.834046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:27.748 [2024-11-28 05:18:56.834055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:27.748 [2024-11-28 05:18:56.834062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:27.748 [2024-11-28 05:18:56.834069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:27.749 [2024-11-28 05:18:56.834076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:27.749 [2024-11-28 05:18:56.834083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:27.749 [2024-11-28 05:18:56.834091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:27.749 [2024-11-28 05:18:56.834099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:27.749 [2024-11-28 05:18:56.834106] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:27.749 [2024-11-28 05:18:56.834114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:27.749 [2024-11-28 05:18:56.834122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:27.749 [2024-11-28 05:18:56.834134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:27.749 [2024-11-28 05:18:56.834142] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:27.749 [2024-11-28 05:18:56.834149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:27.749 [2024-11-28 05:18:56.834156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.834166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:27.749 [2024-11-28 05:18:56.834174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.715 ms 00:29:27.749 [2024-11-28 05:18:56.834205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-28 05:18:56.844939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.844980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:27.749 [2024-11-28 05:18:56.844993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.679 ms 00:29:27.749 [2024-11-28 05:18:56.845002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-28 05:18:56.845044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.845053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:27.749 [2024-11-28 05:18:56.845069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:27.749 [2024-11-28 05:18:56.845076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-28 05:18:56.857891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.857933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:27.749 [2024-11-28 05:18:56.857945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.756 ms 00:29:27.749 [2024-11-28 05:18:56.857953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-28 05:18:56.858000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.858012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:27.749 [2024-11-28 05:18:56.858021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:27.749 [2024-11-28 05:18:56.858031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-28 05:18:56.858124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.858138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:29:27.749 [2024-11-28 05:18:56.858146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:29:27.749 [2024-11-28 05:18:56.858154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-28 05:18:56.858214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.858224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:27.749 [2024-11-28 05:18:56.858232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:29:27.749 [2024-11-28 05:18:56.858242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-28 05:18:56.866873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.866913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:27.749 [2024-11-28 05:18:56.866923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.605 ms 00:29:27.749 [2024-11-28 05:18:56.866932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-28 05:18:56.867026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.867041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:27.749 [2024-11-28 05:18:56.867054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:27.749 [2024-11-28 05:18:56.867062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-28 05:18:56.886309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.886370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:27.749 [2024-11-28 05:18:56.886399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.223 ms 00:29:27.749 [2024-11-28 05:18:56.886411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-28 05:18:56.888135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.888201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:27.749 [2024-11-28 05:18:56.888220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.331 ms 00:29:27.749 [2024-11-28 05:18:56.888231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-28 05:18:56.913128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.913202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:27.749 [2024-11-28 05:18:56.913222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.843 ms 00:29:27.749 [2024-11-28 05:18:56.913233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-28 05:18:56.913395] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:27.749 [2024-11-28 05:18:56.913540] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:27.749 [2024-11-28 05:18:56.913674] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:27.749 [2024-11-28 05:18:56.913799] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:27.749 [2024-11-28 05:18:56.913820] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.913830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:27.749 [2024-11-28 05:18:56.913847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.535 ms 00:29:27.749 [2024-11-28 05:18:56.913863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-28 05:18:56.913926] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:27.749 [2024-11-28 05:18:56.913941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.913950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:27.749 [2024-11-28 05:18:56.913961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:27.749 [2024-11-28 05:18:56.913970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-28 05:18:56.918007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.918056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:27.749 [2024-11-28 05:18:56.918077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.008 ms 00:29:27.749 [2024-11-28 05:18:56.918086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-28 05:18:56.918996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.919038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:27.749 [2024-11-28 05:18:56.919049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:27.749 [2024-11-28 05:18:56.919058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:27.749 [2024-11-28 05:18:56.919135] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:27.749 [2024-11-28 05:18:56.919379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:27.749 [2024-11-28 05:18:56.919404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:27.749 [2024-11-28 05:18:56.919420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.245 ms 00:29:27.749 [2024-11-28 05:18:56.919432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.320 [2024-11-28 05:18:57.595825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.320 [2024-11-28 05:18:57.595944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:28.320 [2024-11-28 05:18:57.595965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 675.967 ms 00:29:28.320 [2024-11-28 05:18:57.595991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.320 [2024-11-28 05:18:57.598385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.320 [2024-11-28 05:18:57.598452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:28.320 [2024-11-28 05:18:57.598465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.718 ms 00:29:28.320 [2024-11-28 05:18:57.598475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.320 [2024-11-28 05:18:57.599648] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:29:28.320 [2024-11-28 05:18:57.599704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.320 [2024-11-28 05:18:57.599715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:28.320 [2024-11-28 05:18:57.599727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.191 ms 00:29:28.320 [2024-11-28 05:18:57.599736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.320 [2024-11-28 05:18:57.599792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.320 [2024-11-28 05:18:57.599806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:28.320 [2024-11-28 05:18:57.599817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:28.320 [2024-11-28 05:18:57.599825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.320 [2024-11-28 05:18:57.599864] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 680.725 ms, result 0 00:29:28.320 [2024-11-28 05:18:57.599925] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:28.320 [2024-11-28 05:18:57.600113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.320 [2024-11-28 05:18:57.600140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:28.320 [2024-11-28 05:18:57.600151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:29:28.320 [2024-11-28 05:18:57.600159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.265 [2024-11-28 05:18:58.248603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.265 [2024-11-28 05:18:58.248665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:29.265 [2024-11-28 05:18:58.248681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 647.725 ms 00:29:29.265 [2024-11-28 05:18:58.248689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.265 [2024-11-28 05:18:58.250657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.265 [2024-11-28 05:18:58.250693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:29.265 [2024-11-28 05:18:58.250702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.578 ms 00:29:29.265 [2024-11-28 05:18:58.250710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.265 [2024-11-28 05:18:58.251615] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:29.265 [2024-11-28 05:18:58.251649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.265 [2024-11-28 05:18:58.251657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:29.265 [2024-11-28 05:18:58.251666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.912 ms 00:29:29.265 [2024-11-28 05:18:58.251674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.265 [2024-11-28 05:18:58.251704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.265 [2024-11-28 05:18:58.251714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:29.265 [2024-11-28 05:18:58.251723] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:29.265 [2024-11-28 05:18:58.251730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.265 [2024-11-28 05:18:58.251765] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 651.849 ms, result 0 00:29:29.265 [2024-11-28 05:18:58.251808] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:29.265 [2024-11-28 05:18:58.251818] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:29.265 [2024-11-28 05:18:58.251828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.265 [2024-11-28 05:18:58.251844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:29.265 [2024-11-28 05:18:58.251853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1332.711 ms 00:29:29.265 [2024-11-28 05:18:58.251863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.265 [2024-11-28 05:18:58.251893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.265 [2024-11-28 05:18:58.251902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:29.265 [2024-11-28 05:18:58.251910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:29.265 [2024-11-28 05:18:58.251917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.265 [2024-11-28 05:18:58.260528] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:29.265 [2024-11-28 05:18:58.260636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.265 [2024-11-28 05:18:58.260649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:29.265 [2024-11-28 05:18:58.260659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.703 ms 00:29:29.265 [2024-11-28 05:18:58.260667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.265 [2024-11-28 05:18:58.261346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.265 [2024-11-28 05:18:58.261370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:29.265 [2024-11-28 05:18:58.261379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.613 ms 00:29:29.265 [2024-11-28 05:18:58.261386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.265 [2024-11-28 05:18:58.263609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.265 [2024-11-28 05:18:58.263634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:29.265 [2024-11-28 05:18:58.263650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.205 ms 00:29:29.265 [2024-11-28 05:18:58.263658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.265 [2024-11-28 05:18:58.263702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.265 [2024-11-28 05:18:58.263711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:29.265 [2024-11-28 05:18:58.263719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:29.265 [2024-11-28 05:18:58.263726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.265 [2024-11-28 05:18:58.263833] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.265 [2024-11-28 05:18:58.263844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:29.265 [2024-11-28 05:18:58.263855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:29.265 [2024-11-28 05:18:58.263862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.265 [2024-11-28 05:18:58.263885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.265 [2024-11-28 05:18:58.263894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:29.265 [2024-11-28 05:18:58.263902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:29.265 [2024-11-28 05:18:58.263912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.265 [2024-11-28 05:18:58.263941] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:29.265 [2024-11-28 05:18:58.263952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.265 [2024-11-28 05:18:58.263960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:29.265 [2024-11-28 05:18:58.263968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:29.265 [2024-11-28 05:18:58.263982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.265 [2024-11-28 05:18:58.264036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.265 [2024-11-28 05:18:58.264045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:29.265 [2024-11-28 05:18:58.264059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:29:29.265 [2024-11-28 05:18:58.264066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.265 [2024-11-28 05:18:58.265417] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1440.936 ms, result 0 00:29:29.265 [2024-11-28 05:18:58.281086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.265 [2024-11-28 05:18:58.297086] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:29.265 [2024-11-28 05:18:58.305219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:29.527 05:18:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:29.527 05:18:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:29.527 05:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:29.527 05:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:29.527 05:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:29.527 05:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:29.527 05:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:29.527 05:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:29.527 Validate MD5 checksum, iteration 1 00:29:29.527 05:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:29.527 05:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:29.527 05:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:29.527 05:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:29.527 05:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:29.527 05:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:29.527 05:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:29.527 [2024-11-28 05:18:58.764774] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:29:29.527 [2024-11-28 05:18:58.764916] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94473 ] 00:29:29.788 [2024-11-28 05:18:58.913729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.788 [2024-11-28 05:18:58.954076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.175  [2024-11-28T05:19:01.402Z] Copying: 532/1024 [MB] (532 MBps) [2024-11-28T05:19:03.310Z] Copying: 1024/1024 [MB] (average 538 MBps) 00:29:34.026 00:29:34.026 05:19:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:34.026 05:19:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:35.934 05:19:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:35.934 Validate MD5 checksum, iteration 2 00:29:35.934 05:19:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5b8854f635275497221ee64dd81559b2 00:29:35.934 05:19:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5b8854f635275497221ee64dd81559b2 != \5\b\8\8\5\4\f\6\3\5\2\7\5\4\9\7\2\2\1\e\e\6\4\d\d\8\1\5\5\9\b\2 ]] 00:29:35.934 05:19:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:35.934 05:19:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:35.934 05:19:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:35.934 05:19:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:35.934 05:19:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:35.934 05:19:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:35.934 05:19:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:35.934 05:19:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:35.934 05:19:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:35.934 [2024-11-28 05:19:04.864890] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:29:35.934 [2024-11-28 05:19:04.864999] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94540 ] 00:29:35.934 [2024-11-28 05:19:05.008968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.934 [2024-11-28 05:19:05.033074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.331  [2024-11-28T05:19:07.189Z] Copying: 538/1024 [MB] (538 MBps) [2024-11-28T05:19:08.133Z] Copying: 1024/1024 [MB] (average 573 MBps) 00:29:38.849 00:29:38.849 05:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:38.849 05:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:40.756 05:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:40.757 05:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c79bf9ce6f63922150d6aa2d25d6ed9d 00:29:40.757 05:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c79bf9ce6f63922150d6aa2d25d6ed9d != \c\7\9\b\f\9\c\e\6\f\6\3\9\2\2\1\5\0\d\6\a\a\2\d\2\5\d\6\e\d\9\d ]] 00:29:40.757 05:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:40.757 05:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:40.757 05:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:40.757 05:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:40.757 05:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:40.757 05:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:40.757 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:40.757 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:40.757 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:40.757 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:40.757 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94439 ]] 00:29:40.757 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94439 00:29:40.757 05:19:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94439 ']' 00:29:40.757 05:19:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94439 00:29:40.757 05:19:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:40.757 05:19:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:40.757 05:19:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94439 00:29:41.016 killing process with pid 94439 00:29:41.016 05:19:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:41.016 05:19:10 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:41.016 05:19:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94439' 00:29:41.016 05:19:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 94439 00:29:41.016 05:19:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94439 00:29:41.016 [2024-11-28 05:19:10.134608] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:41.016 [2024-11-28 05:19:10.138524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.016 [2024-11-28 05:19:10.138678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:41.016 [2024-11-28 05:19:10.138694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:41.016 [2024-11-28 05:19:10.138701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.016 [2024-11-28 05:19:10.138722] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:41.016 [2024-11-28 05:19:10.139088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.016 [2024-11-28 05:19:10.139106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:41.016 [2024-11-28 05:19:10.139116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.355 ms 00:29:41.016 [2024-11-28 05:19:10.139122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.016 [2024-11-28 05:19:10.139314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.016 [2024-11-28 05:19:10.139322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:41.016 [2024-11-28 05:19:10.139328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.176 ms 00:29:41.016 [2024-11-28 05:19:10.139334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.016 [2024-11-28 05:19:10.140314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.016 [2024-11-28 05:19:10.140334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:41.016 [2024-11-28 05:19:10.140341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.967 ms 00:29:41.016 [2024-11-28 05:19:10.140348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.016 [2024-11-28 05:19:10.141223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.016 [2024-11-28 05:19:10.141232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:41.016 [2024-11-28 05:19:10.141243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.853 ms 00:29:41.016 [2024-11-28 05:19:10.141249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.016 [2024-11-28 05:19:10.142752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.016 [2024-11-28 05:19:10.142840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:41.016 [2024-11-28 05:19:10.142897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.477 ms 00:29:41.016 [2024-11-28 05:19:10.142916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.016 [2024-11-28 05:19:10.144110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.016 [2024-11-28 05:19:10.144203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:29:41.016 [2024-11-28 05:19:10.144246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.159 ms 00:29:41.016 [2024-11-28 05:19:10.144264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.016 [2024-11-28 05:19:10.144325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.016 [2024-11-28 05:19:10.144429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:41.016 [2024-11-28 05:19:10.144448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:29:41.016 [2024-11-28 05:19:10.144467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.016 [2024-11-28 05:19:10.145681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.016 [2024-11-28 05:19:10.145769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:41.016 [2024-11-28 05:19:10.145808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.188 ms 00:29:41.016 [2024-11-28 05:19:10.145825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.016 [2024-11-28 05:19:10.146861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.016 [2024-11-28 05:19:10.146959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:41.016 [2024-11-28 05:19:10.146998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.005 ms 00:29:41.016 [2024-11-28 05:19:10.147014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.016 [2024-11-28 05:19:10.147973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.016 [2024-11-28 05:19:10.148088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:41.016 [2024-11-28 05:19:10.148128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.778 ms 00:29:41.016 [2024-11-28 05:19:10.148144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.016 [2024-11-28 05:19:10.149061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.016 [2024-11-28 05:19:10.149134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:41.016 [2024-11-28 05:19:10.149197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.831 ms 00:29:41.016 [2024-11-28 05:19:10.149215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.016 [2024-11-28 05:19:10.149245] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:41.016 [2024-11-28 05:19:10.149318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:41.017 [2024-11-28 05:19:10.149345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:41.017 [2024-11-28 05:19:10.149368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:41.017 [2024-11-28 05:19:10.149391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:41.017 [2024-11-28 05:19:10.149441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:41.017 [2024-11-28 05:19:10.149464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:41.017 [2024-11-28 05:19:10.149486] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:41.017 [2024-11-28 05:19:10.149509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:41.017 [2024-11-28 05:19:10.149576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:41.017 [2024-11-28 05:19:10.149626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:41.017 [2024-11-28 05:19:10.149650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:41.017 [2024-11-28 05:19:10.149673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:41.017 [2024-11-28 05:19:10.149695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:41.017 [2024-11-28 05:19:10.149745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:41.017 [2024-11-28 05:19:10.149768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:41.017 [2024-11-28 05:19:10.149790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:41.017 [2024-11-28 05:19:10.149813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:41.017 [2024-11-28 05:19:10.149853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:41.017 [2024-11-28 05:19:10.150023] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:41.017 [2024-11-28 05:19:10.150086] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 79b7e35d-cd40-48e3-8543-1201adf9e9ba 00:29:41.017 [2024-11-28 05:19:10.150111] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:41.017 [2024-11-28 05:19:10.150131] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:41.017 [2024-11-28 05:19:10.150145] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:41.017 [2024-11-28 05:19:10.150160] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:41.017 [2024-11-28 05:19:10.150175] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:41.017 [2024-11-28 05:19:10.150200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:41.017 [2024-11-28 05:19:10.150279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:41.017 [2024-11-28 05:19:10.150296] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:41.017 [2024-11-28 05:19:10.150310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:41.017 [2024-11-28 05:19:10.150325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.017 [2024-11-28 05:19:10.150340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:41.017 [2024-11-28 05:19:10.150356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.081 ms 00:29:41.017 [2024-11-28 05:19:10.150370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.017 [2024-11-28 05:19:10.151602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.017 [2024-11-28 05:19:10.151677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:29:41.017 [2024-11-28 05:19:10.151744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.207 ms 00:29:41.017 [2024-11-28 05:19:10.151761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.017 [2024-11-28 05:19:10.151841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.017 [2024-11-28 05:19:10.151858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:41.017 [2024-11-28 05:19:10.151905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:29:41.017 [2024-11-28 05:19:10.151922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.017 [2024-11-28 05:19:10.156401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.017 [2024-11-28 05:19:10.156479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:41.017 [2024-11-28 05:19:10.156518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.017 [2024-11-28 05:19:10.156539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.017 [2024-11-28 05:19:10.156571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.017 [2024-11-28 05:19:10.156644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:41.017 [2024-11-28 05:19:10.156662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.017 [2024-11-28 05:19:10.156676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.017 [2024-11-28 05:19:10.156737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.017 [2024-11-28 05:19:10.156757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:41.017 [2024-11-28 05:19:10.156772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.017 [2024-11-28 05:19:10.156787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.017 [2024-11-28 05:19:10.156839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.017 [2024-11-28 05:19:10.156889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:41.017 [2024-11-28 05:19:10.156944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.017 [2024-11-28 05:19:10.156961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.017 [2024-11-28 05:19:10.164892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.017 [2024-11-28 05:19:10.164991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:41.017 [2024-11-28 05:19:10.165032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.017 [2024-11-28 05:19:10.165050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.017 [2024-11-28 05:19:10.171027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.017 [2024-11-28 05:19:10.171131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:41.017 [2024-11-28 05:19:10.171171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.017 [2024-11-28 05:19:10.171199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.017 [2024-11-28 05:19:10.171242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.017 [2024-11-28 05:19:10.171327] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:41.017 [2024-11-28 05:19:10.171345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.017 [2024-11-28 05:19:10.171359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.017 [2024-11-28 05:19:10.171417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.017 [2024-11-28 05:19:10.171443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:41.017 [2024-11-28 05:19:10.171524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.017 [2024-11-28 05:19:10.171541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.017 [2024-11-28 05:19:10.171609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.017 [2024-11-28 05:19:10.171632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:41.017 [2024-11-28 05:19:10.171717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.017 [2024-11-28 05:19:10.171734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.017 [2024-11-28 05:19:10.171771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.017 [2024-11-28 05:19:10.171789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:41.017 [2024-11-28 05:19:10.171807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.017 [2024-11-28 05:19:10.171822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.017 [2024-11-28 05:19:10.171925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.017 [2024-11-28 05:19:10.171947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:41.017 [2024-11-28 05:19:10.171963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.017 [2024-11-28 05:19:10.171977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.017 [2024-11-28 05:19:10.172021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.017 [2024-11-28 05:19:10.172094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:41.017 [2024-11-28 05:19:10.172112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.017 [2024-11-28 05:19:10.172127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.017 [2024-11-28 05:19:10.172247] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 33.702 ms, result 0 00:29:41.278 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:41.278 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:41.278 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:41.278 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:41.278 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:41.278 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:41.278 Remove shared memory files 00:29:41.278 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:41.278 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:29:41.278 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:41.278 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:41.278 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid94252 00:29:41.278 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:41.278 05:19:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:41.278 ************************************ 00:29:41.278 END TEST ftl_upgrade_shutdown 00:29:41.278 ************************************ 00:29:41.278 00:29:41.278 real 1m10.512s 00:29:41.278 user 1m34.387s 00:29:41.278 sys 0m20.395s 00:29:41.278 05:19:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:41.278 05:19:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:41.278 05:19:10 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:29:41.279 05:19:10 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:41.279 05:19:10 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:29:41.279 05:19:10 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:41.279 05:19:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:41.279 ************************************ 00:29:41.279 START TEST ftl_restore_fast 00:29:41.279 ************************************ 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:41.279 * Looking for test storage... 00:29:41.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:41.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.279 --rc genhtml_branch_coverage=1 00:29:41.279 --rc genhtml_function_coverage=1 00:29:41.279 --rc genhtml_legend=1 00:29:41.279 --rc geninfo_all_blocks=1 00:29:41.279 --rc geninfo_unexecuted_blocks=1 00:29:41.279 00:29:41.279 ' 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:41.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.279 --rc genhtml_branch_coverage=1 00:29:41.279 --rc genhtml_function_coverage=1 00:29:41.279 --rc genhtml_legend=1 00:29:41.279 --rc geninfo_all_blocks=1 00:29:41.279 --rc geninfo_unexecuted_blocks=1 00:29:41.279 00:29:41.279 ' 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:41.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.279 --rc genhtml_branch_coverage=1 00:29:41.279 --rc genhtml_function_coverage=1 00:29:41.279 --rc genhtml_legend=1 00:29:41.279 --rc geninfo_all_blocks=1 00:29:41.279 --rc geninfo_unexecuted_blocks=1 00:29:41.279 00:29:41.279 ' 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:41.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.279 --rc genhtml_branch_coverage=1 00:29:41.279 --rc genhtml_function_coverage=1 00:29:41.279 --rc genhtml_legend=1 00:29:41.279 --rc geninfo_all_blocks=1 00:29:41.279 --rc geninfo_unexecuted_blocks=1 00:29:41.279 00:29:41.279 ' 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:41.279 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
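For reference, the "Validate MD5 checksum, iteration N" passes traced above (ftl/upgrade_shutdown.sh @96-@105) follow the shape below. This is a reconstruction from the xtrace output, not the verbatim script: the checksums array (holding the per-gigabyte MD5 sums recorded before the FTL shutdown) and the iterations count are assumptions inferred from the trace, while the tcp_dd helper and its arguments appear verbatim in the trace.

    # Reconstructed sketch of the checksum-validation loop seen above.
    # tcp_dd wraps spdk_dd against the NVMe/TCP-exported ftln1 device.
    skip=0
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read 1024 x 1 MiB blocks from the FTL device, advancing 1 GiB per pass
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        # Data read back after the upgrade/shutdown must match what was written
        [[ $sum != "${checksums[i]}" ]] && exit 1
    done

The matching-escaped comparisons in the trace (e.g. "[[ 5b8854f6... != \5\b\8\8... ]]") are bash xtrace rendering of exactly such a quoted-variable test succeeding on both iterations.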
00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.7MavMQxaWZ 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:29:41.539 05:19:10 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:29:41.539 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:29:41.540 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:41.540 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=94680 00:29:41.540 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 94680 00:29:41.540 05:19:10 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 94680 ']' 00:29:41.540 05:19:10 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:41.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.540 05:19:10 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.540 05:19:10 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.540 05:19:10 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.540 05:19:10 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.540 05:19:10 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:41.540 [2024-11-28 05:19:10.659672] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:29:41.540 [2024-11-28 05:19:10.660077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94680 ] 00:29:41.540 [2024-11-28 05:19:10.809891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.812 [2024-11-28 05:19:10.833064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.430 05:19:11 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.430 05:19:11 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:29:42.430 05:19:11 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:42.430 05:19:11 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:29:42.430 05:19:11 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:42.430 05:19:11 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:29:42.430 05:19:11 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:29:42.430 05:19:11 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:42.690 05:19:11 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:42.690 05:19:11 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:29:42.690 05:19:11 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:42.690 05:19:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:42.690 05:19:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:42.690 05:19:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:42.690 05:19:11 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:29:42.690 05:19:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:42.952 05:19:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:42.952 { 00:29:42.952 "name": "nvme0n1", 00:29:42.952 "aliases": [ 00:29:42.952 "de6bb511-c914-460c-8672-bd46d228f74a" 00:29:42.952 ], 00:29:42.952 "product_name": "NVMe disk", 00:29:42.952 "block_size": 4096, 00:29:42.952 "num_blocks": 1310720, 00:29:42.952 "uuid": "de6bb511-c914-460c-8672-bd46d228f74a", 00:29:42.952 "numa_id": -1, 00:29:42.952 "assigned_rate_limits": { 00:29:42.952 "rw_ios_per_sec": 0, 00:29:42.952 "rw_mbytes_per_sec": 0, 00:29:42.952 "r_mbytes_per_sec": 0, 00:29:42.952 "w_mbytes_per_sec": 0 00:29:42.952 }, 00:29:42.952 "claimed": true, 00:29:42.952 "claim_type": "read_many_write_one", 00:29:42.952 "zoned": false, 00:29:42.952 "supported_io_types": { 00:29:42.952 "read": true, 00:29:42.952 "write": true, 00:29:42.952 "unmap": true, 00:29:42.952 "flush": true, 00:29:42.952 "reset": true, 00:29:42.952 "nvme_admin": true, 00:29:42.952 "nvme_io": true, 00:29:42.952 "nvme_io_md": false, 00:29:42.952 "write_zeroes": true, 00:29:42.952 "zcopy": false, 00:29:42.952 "get_zone_info": false, 00:29:42.952 "zone_management": false, 00:29:42.952 "zone_append": false, 00:29:42.952 "compare": true, 00:29:42.952 "compare_and_write": false, 00:29:42.952 "abort": true, 00:29:42.952 "seek_hole": false, 00:29:42.952 "seek_data": false, 00:29:42.952 "copy": true, 00:29:42.952 "nvme_iov_md": false 00:29:42.952 }, 00:29:42.952 "driver_specific": { 00:29:42.952 "nvme": [ 00:29:42.952 { 00:29:42.952 "pci_address": "0000:00:11.0", 00:29:42.952 "trid": { 00:29:42.952 "trtype": "PCIe", 00:29:42.952 "traddr": "0000:00:11.0" 00:29:42.952 }, 00:29:42.952 "ctrlr_data": { 00:29:42.952 "cntlid": 0, 00:29:42.952 "vendor_id": "0x1b36", 00:29:42.952 "model_number": "QEMU NVMe Ctrl", 00:29:42.952 "serial_number": "12341", 00:29:42.952 "firmware_revision": "8.0.0", 00:29:42.952 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:42.952 "oacs": { 00:29:42.952 "security": 0, 00:29:42.952 "format": 1, 00:29:42.952 "firmware": 0, 00:29:42.952 "ns_manage": 1 00:29:42.952 }, 00:29:42.952 "multi_ctrlr": false, 00:29:42.952 "ana_reporting": false 00:29:42.952 }, 00:29:42.952 "vs": { 00:29:42.952 "nvme_version": "1.4" 00:29:42.952 }, 00:29:42.952 "ns_data": { 00:29:42.952 "id": 1, 00:29:42.952 "can_share": false 00:29:42.952 } 00:29:42.952 } 00:29:42.952 ], 00:29:42.952 "mp_policy": "active_passive" 00:29:42.952 } 00:29:42.952 } 00:29:42.952 ]' 00:29:42.952 05:19:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:42.952 05:19:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:42.952 05:19:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:42.952 05:19:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:42.952 05:19:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:42.952 05:19:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:29:42.952 05:19:12 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:29:42.952 05:19:12 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:42.952 05:19:12 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:29:42.952 05:19:12 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:42.952 05:19:12 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:43.213 05:19:12 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=3d3c9b80-d007-453c-9fe0-f89de1e5e527 00:29:43.213 05:19:12 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:29:43.213 05:19:12 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3d3c9b80-d007-453c-9fe0-f89de1e5e527 00:29:43.474 05:19:12 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:43.733 05:19:12 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=ef1056a9-a86d-41cc-b430-224e40720078 00:29:43.733 05:19:12 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ef1056a9-a86d-41cc-b430-224e40720078 00:29:43.733 05:19:12 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=411db2a8-4ff6-4e5f-abd7-e39e0e6104ce 00:29:43.733 05:19:12 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:43.733 05:19:12 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 411db2a8-4ff6-4e5f-abd7-e39e0e6104ce 00:29:43.733 05:19:12 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:43.733 05:19:12 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:43.733 05:19:12 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=411db2a8-4ff6-4e5f-abd7-e39e0e6104ce 00:29:43.733 05:19:12 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:43.733 05:19:12 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 411db2a8-4ff6-4e5f-abd7-e39e0e6104ce 00:29:43.733 05:19:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=411db2a8-4ff6-4e5f-abd7-e39e0e6104ce 00:29:43.733 05:19:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:43.733 05:19:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:43.733 05:19:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:43.733 05:19:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 411db2a8-4ff6-4e5f-abd7-e39e0e6104ce 00:29:43.991 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:43.991 { 00:29:43.991 "name": "411db2a8-4ff6-4e5f-abd7-e39e0e6104ce", 00:29:43.991 "aliases": [ 00:29:43.991 "lvs/nvme0n1p0" 00:29:43.991 ], 00:29:43.991 "product_name": "Logical Volume", 00:29:43.991 "block_size": 4096, 00:29:43.991 "num_blocks": 26476544, 00:29:43.991 "uuid": "411db2a8-4ff6-4e5f-abd7-e39e0e6104ce", 00:29:43.991 "assigned_rate_limits": { 00:29:43.991 "rw_ios_per_sec": 0, 00:29:43.991 "rw_mbytes_per_sec": 0, 00:29:43.991 "r_mbytes_per_sec": 0, 00:29:43.991 "w_mbytes_per_sec": 0 00:29:43.991 }, 00:29:43.991 "claimed": false, 00:29:43.991 "zoned": false, 00:29:43.991 "supported_io_types": { 00:29:43.991 "read": true, 00:29:43.991 "write": true, 00:29:43.991 "unmap": true, 00:29:43.991 "flush": false, 00:29:43.991 "reset": true, 00:29:43.991 "nvme_admin": false, 00:29:43.991 "nvme_io": false, 00:29:43.991 "nvme_io_md": false, 00:29:43.991 "write_zeroes": true, 00:29:43.991 "zcopy": false, 00:29:43.991 "get_zone_info": false, 00:29:43.991 "zone_management": false, 00:29:43.991 
"zone_append": false, 00:29:43.991 "compare": false, 00:29:43.991 "compare_and_write": false, 00:29:43.991 "abort": false, 00:29:43.991 "seek_hole": true, 00:29:43.991 "seek_data": true, 00:29:43.991 "copy": false, 00:29:43.991 "nvme_iov_md": false 00:29:43.991 }, 00:29:43.991 "driver_specific": { 00:29:43.991 "lvol": { 00:29:43.991 "lvol_store_uuid": "ef1056a9-a86d-41cc-b430-224e40720078", 00:29:43.991 "base_bdev": "nvme0n1", 00:29:43.991 "thin_provision": true, 00:29:43.991 "num_allocated_clusters": 0, 00:29:43.991 "snapshot": false, 00:29:43.991 "clone": false, 00:29:43.991 "esnap_clone": false 00:29:43.991 } 00:29:43.991 } 00:29:43.991 } 00:29:43.991 ]' 00:29:43.991 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:43.991 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:43.991 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:43.991 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:43.991 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:43.991 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:43.991 05:19:13 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:43.991 05:19:13 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:43.991 05:19:13 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:44.249 05:19:13 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:44.249 05:19:13 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:44.249 05:19:13 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 411db2a8-4ff6-4e5f-abd7-e39e0e6104ce 00:29:44.249 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=411db2a8-4ff6-4e5f-abd7-e39e0e6104ce 00:29:44.249 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:44.249 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:44.249 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:44.249 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 411db2a8-4ff6-4e5f-abd7-e39e0e6104ce 00:29:44.507 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:44.507 { 00:29:44.507 "name": "411db2a8-4ff6-4e5f-abd7-e39e0e6104ce", 00:29:44.507 "aliases": [ 00:29:44.507 "lvs/nvme0n1p0" 00:29:44.507 ], 00:29:44.507 "product_name": "Logical Volume", 00:29:44.507 "block_size": 4096, 00:29:44.507 "num_blocks": 26476544, 00:29:44.507 "uuid": "411db2a8-4ff6-4e5f-abd7-e39e0e6104ce", 00:29:44.507 "assigned_rate_limits": { 00:29:44.507 "rw_ios_per_sec": 0, 00:29:44.507 "rw_mbytes_per_sec": 0, 00:29:44.507 "r_mbytes_per_sec": 0, 00:29:44.507 "w_mbytes_per_sec": 0 00:29:44.507 }, 00:29:44.507 "claimed": false, 00:29:44.507 "zoned": false, 00:29:44.507 "supported_io_types": { 00:29:44.507 "read": true, 00:29:44.507 "write": true, 00:29:44.507 "unmap": true, 00:29:44.507 "flush": false, 00:29:44.507 "reset": true, 00:29:44.507 "nvme_admin": false, 00:29:44.507 "nvme_io": false, 00:29:44.507 "nvme_io_md": false, 00:29:44.507 "write_zeroes": true, 00:29:44.507 "zcopy": false, 00:29:44.508 "get_zone_info": false, 00:29:44.508 
"zone_management": false, 00:29:44.508 "zone_append": false, 00:29:44.508 "compare": false, 00:29:44.508 "compare_and_write": false, 00:29:44.508 "abort": false, 00:29:44.508 "seek_hole": true, 00:29:44.508 "seek_data": true, 00:29:44.508 "copy": false, 00:29:44.508 "nvme_iov_md": false 00:29:44.508 }, 00:29:44.508 "driver_specific": { 00:29:44.508 "lvol": { 00:29:44.508 "lvol_store_uuid": "ef1056a9-a86d-41cc-b430-224e40720078", 00:29:44.508 "base_bdev": "nvme0n1", 00:29:44.508 "thin_provision": true, 00:29:44.508 "num_allocated_clusters": 0, 00:29:44.508 "snapshot": false, 00:29:44.508 "clone": false, 00:29:44.508 "esnap_clone": false 00:29:44.508 } 00:29:44.508 } 00:29:44.508 } 00:29:44.508 ]' 00:29:44.508 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:44.508 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:44.508 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:44.508 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:44.508 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:44.508 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:44.508 05:19:13 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:44.508 05:19:13 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:44.767 05:19:13 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:29:44.767 05:19:13 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 411db2a8-4ff6-4e5f-abd7-e39e0e6104ce 00:29:44.767 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=411db2a8-4ff6-4e5f-abd7-e39e0e6104ce 00:29:44.767 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:44.767 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:44.767 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:44.767 05:19:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 411db2a8-4ff6-4e5f-abd7-e39e0e6104ce 00:29:45.027 05:19:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:45.027 { 00:29:45.027 "name": "411db2a8-4ff6-4e5f-abd7-e39e0e6104ce", 00:29:45.027 "aliases": [ 00:29:45.027 "lvs/nvme0n1p0" 00:29:45.027 ], 00:29:45.027 "product_name": "Logical Volume", 00:29:45.027 "block_size": 4096, 00:29:45.027 "num_blocks": 26476544, 00:29:45.027 "uuid": "411db2a8-4ff6-4e5f-abd7-e39e0e6104ce", 00:29:45.027 "assigned_rate_limits": { 00:29:45.027 "rw_ios_per_sec": 0, 00:29:45.027 "rw_mbytes_per_sec": 0, 00:29:45.027 "r_mbytes_per_sec": 0, 00:29:45.027 "w_mbytes_per_sec": 0 00:29:45.027 }, 00:29:45.027 "claimed": false, 00:29:45.027 "zoned": false, 00:29:45.027 "supported_io_types": { 00:29:45.027 "read": true, 00:29:45.027 "write": true, 00:29:45.027 "unmap": true, 00:29:45.027 "flush": false, 00:29:45.027 "reset": true, 00:29:45.027 "nvme_admin": false, 00:29:45.027 "nvme_io": false, 00:29:45.027 "nvme_io_md": false, 00:29:45.027 "write_zeroes": true, 00:29:45.027 "zcopy": false, 00:29:45.027 "get_zone_info": false, 00:29:45.027 "zone_management": false, 00:29:45.027 "zone_append": false, 00:29:45.027 "compare": false, 00:29:45.027 "compare_and_write": false, 00:29:45.027 "abort": false, 
00:29:45.027 "seek_hole": true, 00:29:45.027 "seek_data": true, 00:29:45.027 "copy": false, 00:29:45.027 "nvme_iov_md": false 00:29:45.027 }, 00:29:45.027 "driver_specific": { 00:29:45.027 "lvol": { 00:29:45.027 "lvol_store_uuid": "ef1056a9-a86d-41cc-b430-224e40720078", 00:29:45.027 "base_bdev": "nvme0n1", 00:29:45.027 "thin_provision": true, 00:29:45.027 "num_allocated_clusters": 0, 00:29:45.027 "snapshot": false, 00:29:45.027 "clone": false, 00:29:45.027 "esnap_clone": false 00:29:45.027 } 00:29:45.027 } 00:29:45.027 } 00:29:45.027 ]' 00:29:45.027 05:19:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:45.027 05:19:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:45.027 05:19:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:45.027 05:19:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:45.027 05:19:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:45.027 05:19:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:45.027 05:19:14 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:45.027 05:19:14 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 411db2a8-4ff6-4e5f-abd7-e39e0e6104ce --l2p_dram_limit 10' 00:29:45.027 05:19:14 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:45.027 05:19:14 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:45.027 05:19:14 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:45.027 05:19:14 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:45.027 05:19:14 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:29:45.027 05:19:14 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 411db2a8-4ff6-4e5f-abd7-e39e0e6104ce --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:45.289 [2024-11-28 05:19:14.432245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.289 [2024-11-28 05:19:14.432321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:45.289 [2024-11-28 05:19:14.432338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:45.289 [2024-11-28 05:19:14.432351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.289 [2024-11-28 05:19:14.432428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.289 [2024-11-28 05:19:14.432446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:45.289 [2024-11-28 05:19:14.432455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:45.289 [2024-11-28 05:19:14.432469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.289 [2024-11-28 05:19:14.432492] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:45.289 [2024-11-28 05:19:14.432800] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:45.289 [2024-11-28 05:19:14.432829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.289 [2024-11-28 05:19:14.432847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:45.289 [2024-11-28 05:19:14.432856] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:29:45.289 [2024-11-28 05:19:14.432867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.289 [2024-11-28 05:19:14.432946] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID bac083d5-348e-4fb4-bb51-a59d48b2a450 00:29:45.289 [2024-11-28 05:19:14.434774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.289 [2024-11-28 05:19:14.434959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:45.289 [2024-11-28 05:19:14.434983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:45.289 [2024-11-28 05:19:14.434992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.289 [2024-11-28 05:19:14.443932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.289 [2024-11-28 05:19:14.443984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:45.289 [2024-11-28 05:19:14.443998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.872 ms 00:29:45.289 [2024-11-28 05:19:14.444007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.289 [2024-11-28 05:19:14.444106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.289 [2024-11-28 05:19:14.444118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:45.289 [2024-11-28 05:19:14.444133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:29:45.289 [2024-11-28 05:19:14.444141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.289 [2024-11-28 05:19:14.444233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.289 [2024-11-28 05:19:14.444244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:45.289 [2024-11-28 05:19:14.444256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:45.289 [2024-11-28 05:19:14.444267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.289 [2024-11-28 05:19:14.444295] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:45.289 [2024-11-28 05:19:14.446577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.289 [2024-11-28 05:19:14.446625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:45.289 [2024-11-28 05:19:14.446637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.292 ms 00:29:45.289 [2024-11-28 05:19:14.446648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.289 [2024-11-28 05:19:14.446695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.289 [2024-11-28 05:19:14.446707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:45.289 [2024-11-28 05:19:14.446717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:45.289 [2024-11-28 05:19:14.446731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.289 [2024-11-28 05:19:14.446762] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:45.289 [2024-11-28 05:19:14.446915] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:45.289 [2024-11-28 05:19:14.446930] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:45.289 [2024-11-28 05:19:14.446946] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:45.289 [2024-11-28 05:19:14.446958] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:45.289 [2024-11-28 05:19:14.446973] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:45.289 [2024-11-28 05:19:14.446983] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:45.289 [2024-11-28 05:19:14.446997] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:45.289 [2024-11-28 05:19:14.447008] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:45.289 [2024-11-28 05:19:14.447018] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:45.290 [2024-11-28 05:19:14.447026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.290 [2024-11-28 05:19:14.447036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:45.290 [2024-11-28 05:19:14.447045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:29:45.290 [2024-11-28 05:19:14.447055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.290 [2024-11-28 05:19:14.447140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.290 [2024-11-28 05:19:14.447154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:45.290 [2024-11-28 05:19:14.447165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:45.290 [2024-11-28 05:19:14.447177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.290 [2024-11-28 05:19:14.447293] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:45.290 [2024-11-28 05:19:14.447306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:45.290 [2024-11-28 05:19:14.447315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:45.290 [2024-11-28 05:19:14.447325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.290 [2024-11-28 05:19:14.447336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:45.290 [2024-11-28 05:19:14.447346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:45.290 [2024-11-28 05:19:14.447353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:45.290 [2024-11-28 05:19:14.447364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:45.290 [2024-11-28 05:19:14.447371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:45.290 [2024-11-28 05:19:14.447380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:45.290 [2024-11-28 05:19:14.447387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:45.290 [2024-11-28 05:19:14.447397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:45.290 [2024-11-28 05:19:14.447403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:45.290 [2024-11-28 05:19:14.447415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:45.290 [2024-11-28 05:19:14.447422] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:45.290 [2024-11-28 05:19:14.447433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.290 [2024-11-28 05:19:14.447439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:45.290 [2024-11-28 05:19:14.447448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:45.290 [2024-11-28 05:19:14.447454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.290 [2024-11-28 05:19:14.447463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:45.290 [2024-11-28 05:19:14.447470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:45.290 [2024-11-28 05:19:14.447479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:45.290 [2024-11-28 05:19:14.447486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:45.290 [2024-11-28 05:19:14.447495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:45.290 [2024-11-28 05:19:14.447503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:45.290 [2024-11-28 05:19:14.447515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:45.290 [2024-11-28 05:19:14.447522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:45.290 [2024-11-28 05:19:14.447531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:45.290 [2024-11-28 05:19:14.447538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:45.290 [2024-11-28 05:19:14.447549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:45.290 [2024-11-28 05:19:14.447555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:45.290 [2024-11-28 05:19:14.447565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:45.290 [2024-11-28 05:19:14.447571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:45.290 [2024-11-28 05:19:14.447581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:45.290 [2024-11-28 05:19:14.447588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:45.290 [2024-11-28 05:19:14.447597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:45.290 [2024-11-28 05:19:14.447604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:45.290 [2024-11-28 05:19:14.447613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:45.290 [2024-11-28 05:19:14.447620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:45.290 [2024-11-28 05:19:14.447629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.290 [2024-11-28 05:19:14.447636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:45.290 [2024-11-28 05:19:14.447645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:45.290 [2024-11-28 05:19:14.447651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.290 [2024-11-28 05:19:14.447659] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:45.290 [2024-11-28 05:19:14.447675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:45.290 [2024-11-28 05:19:14.447687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:29:45.290 [2024-11-28 05:19:14.447694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:45.290 [2024-11-28 05:19:14.447708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:45.290 [2024-11-28 05:19:14.447714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:45.290 [2024-11-28 05:19:14.447723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:45.290 [2024-11-28 05:19:14.447729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:45.290 [2024-11-28 05:19:14.447739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:45.290 [2024-11-28 05:19:14.447745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:45.290 [2024-11-28 05:19:14.447759] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:45.290 [2024-11-28 05:19:14.447768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:45.290 [2024-11-28 05:19:14.447781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:45.290 [2024-11-28 05:19:14.447789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:45.290 [2024-11-28 05:19:14.447800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:45.290 [2024-11-28 05:19:14.447808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:45.290 [2024-11-28 05:19:14.447817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:45.290 [2024-11-28 05:19:14.447824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:45.290 [2024-11-28 05:19:14.447836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:45.290 [2024-11-28 05:19:14.447844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:45.290 [2024-11-28 05:19:14.447853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:45.290 [2024-11-28 05:19:14.447860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:45.290 [2024-11-28 05:19:14.447870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:45.290 [2024-11-28 05:19:14.447878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:45.290 [2024-11-28 05:19:14.447887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:45.290 [2024-11-28 05:19:14.447894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
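The table above, printed by ftl_superblock_v5_md_layout_dump, describes each metadata region as a type/version pair plus a block offset and block size in hex, in units of the 4096-byte block size reported by bdev_get_bdevs earlier. As a minimal sanity-check sketch (assuming that 4096-byte unit), the l2p region decodes to the same 80.00 MiB shown in the NV cache layout dump, and agrees with the 20971520 L2P entries at an L2P address size of 4 bytes:

blk_sz=0x5000                                  # "Region type:0x2 ... blk_sz:0x5000" (l2p)
echo $(( blk_sz * 4096 / 1024 / 1024 )) MiB    # -> 80 MiB, matching "Region l2p ... blocks: 80.00 MiB"
echo $(( 20971520 * 4 / 1024 / 1024 )) MiB     # L2P entries x L2P address size -> 80 MiB again

Because the test creates ftl0 with --l2p_dram_limit 10, only about 10 MiB of that 80 MiB table may stay resident in DRAM; the "l2p maximum resident size is: 9 (of 10) MiB" notice further down reflects that cap.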
00:29:45.290 [2024-11-28 05:19:14.447904] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:45.290 [2024-11-28 05:19:14.447912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:45.290 [2024-11-28 05:19:14.447923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:45.290 [2024-11-28 05:19:14.447930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:45.290 [2024-11-28 05:19:14.447940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:45.290 [2024-11-28 05:19:14.447948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:45.290 [2024-11-28 05:19:14.447958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:45.290 [2024-11-28 05:19:14.447965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:45.290 [2024-11-28 05:19:14.447978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:29:45.290 [2024-11-28 05:19:14.447986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.290 [2024-11-28 05:19:14.448028] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:45.290 [2024-11-28 05:19:14.448038] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:49.495 [2024-11-28 05:19:18.081064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.495 [2024-11-28 05:19:18.081129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:49.495 [2024-11-28 05:19:18.081145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3633.022 ms 00:29:49.495 [2024-11-28 05:19:18.081154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.495 [2024-11-28 05:19:18.089688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.495 [2024-11-28 05:19:18.089728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:49.495 [2024-11-28 05:19:18.089743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.437 ms 00:29:49.495 [2024-11-28 05:19:18.089751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.495 [2024-11-28 05:19:18.089849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.495 [2024-11-28 05:19:18.089857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:49.495 [2024-11-28 05:19:18.089868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:49.495 [2024-11-28 05:19:18.089875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.495 [2024-11-28 05:19:18.098955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.495 [2024-11-28 05:19:18.098997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:49.495 [2024-11-28 05:19:18.099009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.034 ms 00:29:49.495 [2024-11-28 05:19:18.099020] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.495 [2024-11-28 05:19:18.099047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.495 [2024-11-28 05:19:18.099055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:49.495 [2024-11-28 05:19:18.099064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:49.495 [2024-11-28 05:19:18.099072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.495 [2024-11-28 05:19:18.099440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.495 [2024-11-28 05:19:18.099456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:49.495 [2024-11-28 05:19:18.099467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:29:49.495 [2024-11-28 05:19:18.099475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.495 [2024-11-28 05:19:18.099589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.495 [2024-11-28 05:19:18.099598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:49.495 [2024-11-28 05:19:18.099609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:29:49.495 [2024-11-28 05:19:18.099617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.495 [2024-11-28 05:19:18.105330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.495 [2024-11-28 05:19:18.105470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:49.495 [2024-11-28 05:19:18.105490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.691 ms 00:29:49.495 [2024-11-28 05:19:18.105497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.495 [2024-11-28 05:19:18.122482] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:49.495 [2024-11-28 05:19:18.125729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.125770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:49.496 [2024-11-28 05:19:18.125785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.142 ms 00:29:49.496 [2024-11-28 05:19:18.125798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.197946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.198004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:49.496 [2024-11-28 05:19:18.198017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.107 ms 00:29:49.496 [2024-11-28 05:19:18.198034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.198237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.198252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:49.496 [2024-11-28 05:19:18.198261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:29:49.496 [2024-11-28 05:19:18.198271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.202980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.203024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial 
band info metadata 00:29:49.496 [2024-11-28 05:19:18.203037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.678 ms 00:29:49.496 [2024-11-28 05:19:18.203048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.206838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.206880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:49.496 [2024-11-28 05:19:18.206890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.753 ms 00:29:49.496 [2024-11-28 05:19:18.206899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.207225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.207243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:49.496 [2024-11-28 05:19:18.207252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:29:49.496 [2024-11-28 05:19:18.207263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.245641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.245688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:49.496 [2024-11-28 05:19:18.245702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.358 ms 00:29:49.496 [2024-11-28 05:19:18.245712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.251302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.251456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:49.496 [2024-11-28 05:19:18.251474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.541 ms 00:29:49.496 [2024-11-28 05:19:18.251484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.255924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.255965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:49.496 [2024-11-28 05:19:18.255974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.405 ms 00:29:49.496 [2024-11-28 05:19:18.255983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.260991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.261035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:49.496 [2024-11-28 05:19:18.261045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.972 ms 00:29:49.496 [2024-11-28 05:19:18.261056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.261097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.261109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:49.496 [2024-11-28 05:19:18.261118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:49.496 [2024-11-28 05:19:18.261127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.261212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.261224] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:49.496 [2024-11-28 05:19:18.261233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:49.496 [2024-11-28 05:19:18.261245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.262201] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3829.528 ms, result 0 00:29:49.496 { 00:29:49.496 "name": "ftl0", 00:29:49.496 "uuid": "bac083d5-348e-4fb4-bb51-a59d48b2a450" 00:29:49.496 } 00:29:49.496 05:19:18 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:29:49.496 05:19:18 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:49.496 05:19:18 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:29:49.496 05:19:18 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:49.496 [2024-11-28 05:19:18.706222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.706271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:49.496 [2024-11-28 05:19:18.706290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:49.496 [2024-11-28 05:19:18.706299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.706324] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:49.496 [2024-11-28 05:19:18.706908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.706939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:49.496 [2024-11-28 05:19:18.706949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:29:49.496 [2024-11-28 05:19:18.706958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.707233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.707251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:49.496 [2024-11-28 05:19:18.707263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:29:49.496 [2024-11-28 05:19:18.707276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.710527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.710550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:49.496 [2024-11-28 05:19:18.710560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:29:49.496 [2024-11-28 05:19:18.710570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.716781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.716816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:49.496 [2024-11-28 05:19:18.716827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.193 ms 00:29:49.496 [2024-11-28 05:19:18.716839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.719637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:49.496 [2024-11-28 05:19:18.719685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:49.496 [2024-11-28 05:19:18.719695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.721 ms 00:29:49.496 [2024-11-28 05:19:18.719704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.725789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.725955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:49.496 [2024-11-28 05:19:18.725973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.061 ms 00:29:49.496 [2024-11-28 05:19:18.725984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.496 [2024-11-28 05:19:18.726141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.496 [2024-11-28 05:19:18.726156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:49.496 [2024-11-28 05:19:18.726165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:29:49.497 [2024-11-28 05:19:18.726175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.497 [2024-11-28 05:19:18.729236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.497 [2024-11-28 05:19:18.729277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:49.497 [2024-11-28 05:19:18.729287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.024 ms 00:29:49.497 [2024-11-28 05:19:18.729299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.497 [2024-11-28 05:19:18.731498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.497 [2024-11-28 05:19:18.731542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:49.497 [2024-11-28 05:19:18.731551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.175 ms 00:29:49.497 [2024-11-28 05:19:18.731560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.497 [2024-11-28 05:19:18.733700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.497 [2024-11-28 05:19:18.733745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:49.497 [2024-11-28 05:19:18.733753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.110 ms 00:29:49.497 [2024-11-28 05:19:18.733763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.497 [2024-11-28 05:19:18.735669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.497 [2024-11-28 05:19:18.735713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:49.497 [2024-11-28 05:19:18.735722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.859 ms 00:29:49.497 [2024-11-28 05:19:18.735731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.497 [2024-11-28 05:19:18.735750] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:49.497 [2024-11-28 05:19:18.735766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735786] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.735993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736000] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 
05:19:18.736254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:49.497 [2024-11-28 05:19:18.736262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:29:49.498 [2024-11-28 05:19:18.736470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:49.498 [2024-11-28 05:19:18.736676] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:49.498 [2024-11-28 05:19:18.736684] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bac083d5-348e-4fb4-bb51-a59d48b2a450 00:29:49.498 
[2024-11-28 05:19:18.736694] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:49.498 [2024-11-28 05:19:18.736701] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:49.498 [2024-11-28 05:19:18.736710] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:49.498 [2024-11-28 05:19:18.736718] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:49.498 [2024-11-28 05:19:18.736729] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:49.498 [2024-11-28 05:19:18.736737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:49.498 [2024-11-28 05:19:18.736746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:49.498 [2024-11-28 05:19:18.736752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:49.498 [2024-11-28 05:19:18.736760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:49.498 [2024-11-28 05:19:18.736767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.498 [2024-11-28 05:19:18.736778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:49.498 [2024-11-28 05:19:18.736786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:29:49.498 [2024-11-28 05:19:18.736796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.498 [2024-11-28 05:19:18.738990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.498 [2024-11-28 05:19:18.739111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:49.498 [2024-11-28 05:19:18.739173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.176 ms 00:29:49.498 [2024-11-28 05:19:18.739214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.498 [2024-11-28 05:19:18.739345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.498 [2024-11-28 05:19:18.739448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:49.498 [2024-11-28 05:19:18.739509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:29:49.498 [2024-11-28 05:19:18.739533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.498 [2024-11-28 05:19:18.746370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.498 [2024-11-28 05:19:18.746505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:49.498 [2024-11-28 05:19:18.746587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.498 [2024-11-28 05:19:18.746615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.499 [2024-11-28 05:19:18.746690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.499 [2024-11-28 05:19:18.746718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:49.499 [2024-11-28 05:19:18.746738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.499 [2024-11-28 05:19:18.746758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.499 [2024-11-28 05:19:18.746852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.499 [2024-11-28 05:19:18.746902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:49.499 [2024-11-28 05:19:18.746922] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.499 [2024-11-28 05:19:18.746947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.499 [2024-11-28 05:19:18.746977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.499 [2024-11-28 05:19:18.746999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:49.499 [2024-11-28 05:19:18.747018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.499 [2024-11-28 05:19:18.747082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.499 [2024-11-28 05:19:18.760255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.499 [2024-11-28 05:19:18.760500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:49.499 [2024-11-28 05:19:18.760561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.499 [2024-11-28 05:19:18.760592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.499 [2024-11-28 05:19:18.771360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.499 [2024-11-28 05:19:18.771544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:49.499 [2024-11-28 05:19:18.771600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.499 [2024-11-28 05:19:18.771626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.499 [2024-11-28 05:19:18.771717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.499 [2024-11-28 05:19:18.771747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:49.499 [2024-11-28 05:19:18.771768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.499 [2024-11-28 05:19:18.771789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.499 [2024-11-28 05:19:18.771853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.499 [2024-11-28 05:19:18.771880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:49.499 [2024-11-28 05:19:18.771949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.499 [2024-11-28 05:19:18.771976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.499 [2024-11-28 05:19:18.772077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.499 [2024-11-28 05:19:18.772113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:49.499 [2024-11-28 05:19:18.772134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.499 [2024-11-28 05:19:18.772155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.499 [2024-11-28 05:19:18.772221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.499 [2024-11-28 05:19:18.772248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:49.499 [2024-11-28 05:19:18.772271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.499 [2024-11-28 05:19:18.772283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.499 [2024-11-28 05:19:18.772326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.499 [2024-11-28 05:19:18.772341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:29:49.499 [2024-11-28 05:19:18.772350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.499 [2024-11-28 05:19:18.772361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.499 [2024-11-28 05:19:18.772412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:49.499 [2024-11-28 05:19:18.772424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:49.499 [2024-11-28 05:19:18.772432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:49.499 [2024-11-28 05:19:18.772442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.499 [2024-11-28 05:19:18.772587] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.322 ms, result 0 00:29:49.760 true 00:29:49.760 05:19:18 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 94680 00:29:49.760 05:19:18 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94680 ']' 00:29:49.760 05:19:18 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94680 00:29:49.760 05:19:18 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:29:49.760 05:19:18 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:49.760 05:19:18 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94680 00:29:49.760 killing process with pid 94680 00:29:49.760 05:19:18 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:49.760 05:19:18 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:49.760 05:19:18 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94680' 00:29:49.760 05:19:18 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 94680 00:29:49.761 05:19:18 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 94680 00:29:55.045 05:19:23 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:29:58.343 262144+0 records in 00:29:58.343 262144+0 records out 00:29:58.343 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.87075 s, 277 MB/s 00:29:58.343 05:19:27 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:00.255 05:19:29 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:00.255 [2024-11-28 05:19:29.112159] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
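At this point the first FTL instance has been shut down cleanly ('FTL shutdown', 66.322 ms), restore.sh has filled a 1 GiB test file from /dev/urandom and recorded its md5sum (presumably for comparison after the restore), and spdk_dd is starting up to replay that file into ftl0 using the bdev config saved to ftl.json. A quick cross-check of the dd figures, as a sketch assuming dd's usual decimal (SI) rate reporting:

bytes=$(( 256 * 1024 * 4096 ))             # count=256K records of bs=4K
echo "$bytes bytes"                        # 1073741824 -> the "1.1 GB, 1.0 GiB" dd prints
echo $(( bytes * 100000 / 387075 )) B/s    # elapsed 3.87075 s -> ~277 MB/s, as reported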
00:30:00.255 [2024-11-28 05:19:29.112840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94888 ] 00:30:00.255 [2024-11-28 05:19:29.257245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.255 [2024-11-28 05:19:29.283979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.255 [2024-11-28 05:19:29.394749] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:00.255 [2024-11-28 05:19:29.394822] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:00.517 [2024-11-28 05:19:29.554657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.517 [2024-11-28 05:19:29.554721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:00.517 [2024-11-28 05:19:29.554737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:00.517 [2024-11-28 05:19:29.554746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.517 [2024-11-28 05:19:29.554804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.517 [2024-11-28 05:19:29.554820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:00.517 [2024-11-28 05:19:29.554829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:00.517 [2024-11-28 05:19:29.554843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.517 [2024-11-28 05:19:29.554873] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:00.517 [2024-11-28 05:19:29.555154] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:00.517 [2024-11-28 05:19:29.555199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.517 [2024-11-28 05:19:29.555213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:00.517 [2024-11-28 05:19:29.555228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:30:00.517 [2024-11-28 05:19:29.555237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.517 [2024-11-28 05:19:29.556974] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:00.517 [2024-11-28 05:19:29.561002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.517 [2024-11-28 05:19:29.561057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:00.517 [2024-11-28 05:19:29.561069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.031 ms 00:30:00.517 [2024-11-28 05:19:29.561086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.517 [2024-11-28 05:19:29.561168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.517 [2024-11-28 05:19:29.561205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:00.517 [2024-11-28 05:19:29.561216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:30:00.517 [2024-11-28 05:19:29.561223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.517 [2024-11-28 05:19:29.569729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:00.517 [2024-11-28 05:19:29.569775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:00.517 [2024-11-28 05:19:29.569799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.458 ms 00:30:00.517 [2024-11-28 05:19:29.569807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.517 [2024-11-28 05:19:29.569905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.517 [2024-11-28 05:19:29.569915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:00.517 [2024-11-28 05:19:29.569927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:30:00.517 [2024-11-28 05:19:29.569937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.517 [2024-11-28 05:19:29.570000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.517 [2024-11-28 05:19:29.570016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:00.517 [2024-11-28 05:19:29.570025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:00.517 [2024-11-28 05:19:29.570036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.517 [2024-11-28 05:19:29.570058] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:00.517 [2024-11-28 05:19:29.572150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.517 [2024-11-28 05:19:29.572216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:00.517 [2024-11-28 05:19:29.572226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.098 ms 00:30:00.517 [2024-11-28 05:19:29.572234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.517 [2024-11-28 05:19:29.572269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.518 [2024-11-28 05:19:29.572278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:00.518 [2024-11-28 05:19:29.572287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:00.518 [2024-11-28 05:19:29.572299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.518 [2024-11-28 05:19:29.572324] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:00.518 [2024-11-28 05:19:29.572346] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:00.518 [2024-11-28 05:19:29.572393] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:00.518 [2024-11-28 05:19:29.572409] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:00.518 [2024-11-28 05:19:29.572515] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:00.518 [2024-11-28 05:19:29.572526] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:00.518 [2024-11-28 05:19:29.572540] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:00.518 [2024-11-28 05:19:29.572552] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:00.518 [2024-11-28 05:19:29.572560] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:00.518 [2024-11-28 05:19:29.572569] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:00.518 [2024-11-28 05:19:29.572580] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:00.518 [2024-11-28 05:19:29.572588] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:00.518 [2024-11-28 05:19:29.572595] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:00.518 [2024-11-28 05:19:29.572605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.518 [2024-11-28 05:19:29.572612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:00.518 [2024-11-28 05:19:29.572620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:30:00.518 [2024-11-28 05:19:29.572633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.518 [2024-11-28 05:19:29.572718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.518 [2024-11-28 05:19:29.572731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:00.518 [2024-11-28 05:19:29.572738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:00.518 [2024-11-28 05:19:29.572749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.518 [2024-11-28 05:19:29.572851] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:00.518 [2024-11-28 05:19:29.572863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:00.518 [2024-11-28 05:19:29.572873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:00.518 [2024-11-28 05:19:29.572881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:00.518 [2024-11-28 05:19:29.572891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:00.518 [2024-11-28 05:19:29.572899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:00.518 [2024-11-28 05:19:29.572908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:00.518 [2024-11-28 05:19:29.572917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:00.518 [2024-11-28 05:19:29.572925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:00.518 [2024-11-28 05:19:29.572933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:00.518 [2024-11-28 05:19:29.572947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:00.518 [2024-11-28 05:19:29.572958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:00.518 [2024-11-28 05:19:29.572966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:00.518 [2024-11-28 05:19:29.572974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:00.518 [2024-11-28 05:19:29.572982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:00.518 [2024-11-28 05:19:29.572991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:00.518 [2024-11-28 05:19:29.572999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:00.518 [2024-11-28 05:19:29.573006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:00.518 [2024-11-28 05:19:29.573014] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:00.518 [2024-11-28 05:19:29.573022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:00.518 [2024-11-28 05:19:29.573029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:00.518 [2024-11-28 05:19:29.573037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:00.518 [2024-11-28 05:19:29.573045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:00.518 [2024-11-28 05:19:29.573052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:00.518 [2024-11-28 05:19:29.573060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:00.518 [2024-11-28 05:19:29.573068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:00.518 [2024-11-28 05:19:29.573080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:00.518 [2024-11-28 05:19:29.573088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:00.518 [2024-11-28 05:19:29.573096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:00.518 [2024-11-28 05:19:29.573103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:00.518 [2024-11-28 05:19:29.573111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:00.518 [2024-11-28 05:19:29.573119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:00.518 [2024-11-28 05:19:29.573127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:00.518 [2024-11-28 05:19:29.573134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:00.518 [2024-11-28 05:19:29.573142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:00.518 [2024-11-28 05:19:29.573150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:00.518 [2024-11-28 05:19:29.573157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:00.518 [2024-11-28 05:19:29.573165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:00.518 [2024-11-28 05:19:29.573172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:00.518 [2024-11-28 05:19:29.573196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:00.518 [2024-11-28 05:19:29.573203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:00.518 [2024-11-28 05:19:29.573210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:00.518 [2024-11-28 05:19:29.573219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:00.518 [2024-11-28 05:19:29.573226] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:00.518 [2024-11-28 05:19:29.573241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:00.518 [2024-11-28 05:19:29.573251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:00.518 [2024-11-28 05:19:29.573259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:00.518 [2024-11-28 05:19:29.573267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:00.518 [2024-11-28 05:19:29.573274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:00.518 [2024-11-28 05:19:29.573281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:00.518 
[2024-11-28 05:19:29.573288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:00.518 [2024-11-28 05:19:29.573295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:00.518 [2024-11-28 05:19:29.573301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:00.518 [2024-11-28 05:19:29.573310] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:00.518 [2024-11-28 05:19:29.573319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:00.518 [2024-11-28 05:19:29.573328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:00.518 [2024-11-28 05:19:29.573335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:00.518 [2024-11-28 05:19:29.573343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:00.518 [2024-11-28 05:19:29.573352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:00.518 [2024-11-28 05:19:29.573360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:00.518 [2024-11-28 05:19:29.573367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:00.518 [2024-11-28 05:19:29.573375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:00.518 [2024-11-28 05:19:29.573382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:00.518 [2024-11-28 05:19:29.573390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:00.518 [2024-11-28 05:19:29.573402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:00.518 [2024-11-28 05:19:29.573409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:00.518 [2024-11-28 05:19:29.573416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:00.518 [2024-11-28 05:19:29.573423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:00.518 [2024-11-28 05:19:29.573430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:00.518 [2024-11-28 05:19:29.573438] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:00.518 [2024-11-28 05:19:29.573446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:00.518 [2024-11-28 05:19:29.573454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:00.518 [2024-11-28 05:19:29.573462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:00.519 [2024-11-28 05:19:29.573471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:00.519 [2024-11-28 05:19:29.573481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:00.519 [2024-11-28 05:19:29.573489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.573496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:00.519 [2024-11-28 05:19:29.573504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:30:00.519 [2024-11-28 05:19:29.573528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.588172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.588235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:00.519 [2024-11-28 05:19:29.588248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.595 ms 00:30:00.519 [2024-11-28 05:19:29.588256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.588346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.588355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:00.519 [2024-11-28 05:19:29.588370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:00.519 [2024-11-28 05:19:29.588381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.608241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.608462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:00.519 [2024-11-28 05:19:29.608487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.799 ms 00:30:00.519 [2024-11-28 05:19:29.608499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.608557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.608570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:00.519 [2024-11-28 05:19:29.608581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:00.519 [2024-11-28 05:19:29.608598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.609166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.609234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:00.519 [2024-11-28 05:19:29.609248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:30:00.519 [2024-11-28 05:19:29.609259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.609444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.609457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:00.519 [2024-11-28 05:19:29.609468] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:30:00.519 [2024-11-28 05:19:29.609478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.617908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.617959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:00.519 [2024-11-28 05:19:29.617971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.404 ms 00:30:00.519 [2024-11-28 05:19:29.617978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.622097] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:00.519 [2024-11-28 05:19:29.622154] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:00.519 [2024-11-28 05:19:29.622168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.622176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:00.519 [2024-11-28 05:19:29.622206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.084 ms 00:30:00.519 [2024-11-28 05:19:29.622213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.638361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.638420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:00.519 [2024-11-28 05:19:29.638433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.085 ms 00:30:00.519 [2024-11-28 05:19:29.638441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.641421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.641472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:00.519 [2024-11-28 05:19:29.641482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.921 ms 00:30:00.519 [2024-11-28 05:19:29.641490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.644496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.644549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:00.519 [2024-11-28 05:19:29.644559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.945 ms 00:30:00.519 [2024-11-28 05:19:29.644567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.644920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.644940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:00.519 [2024-11-28 05:19:29.644950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:30:00.519 [2024-11-28 05:19:29.644958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.669775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.669839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:00.519 [2024-11-28 05:19:29.669853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.798 ms 00:30:00.519 [2024-11-28 05:19:29.669861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.678358] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:00.519 [2024-11-28 05:19:29.681474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.681531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:00.519 [2024-11-28 05:19:29.681543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.560 ms 00:30:00.519 [2024-11-28 05:19:29.681557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.681636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.681648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:00.519 [2024-11-28 05:19:29.681657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:00.519 [2024-11-28 05:19:29.681673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.681741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.681751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:00.519 [2024-11-28 05:19:29.681760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:00.519 [2024-11-28 05:19:29.681771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.681797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.681806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:00.519 [2024-11-28 05:19:29.681815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:00.519 [2024-11-28 05:19:29.681823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.681865] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:00.519 [2024-11-28 05:19:29.681875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.681883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:00.519 [2024-11-28 05:19:29.681896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:00.519 [2024-11-28 05:19:29.681906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.687763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.687956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:00.519 [2024-11-28 05:19:29.687979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.838 ms 00:30:00.519 [2024-11-28 05:19:29.687988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 [2024-11-28 05:19:29.688076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.519 [2024-11-28 05:19:29.688088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:00.519 [2024-11-28 05:19:29.688101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:00.519 [2024-11-28 05:19:29.688110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.519 
[2024-11-28 05:19:29.689329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 134.177 ms, result 0 00:30:01.461  [2024-11-28T05:19:32.128Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-28T05:19:33.069Z] Copying: 54/1024 [MB] (33 MBps) [2024-11-28T05:19:34.012Z] Copying: 69/1024 [MB] (14 MBps) [2024-11-28T05:19:34.955Z] Copying: 88/1024 [MB] (19 MBps) [2024-11-28T05:19:35.921Z] Copying: 101/1024 [MB] (13 MBps) [2024-11-28T05:19:36.865Z] Copying: 115/1024 [MB] (13 MBps) [2024-11-28T05:19:37.808Z] Copying: 125/1024 [MB] (10 MBps) [2024-11-28T05:19:38.749Z] Copying: 137/1024 [MB] (11 MBps) [2024-11-28T05:19:40.133Z] Copying: 147/1024 [MB] (10 MBps) [2024-11-28T05:19:40.706Z] Copying: 160/1024 [MB] (12 MBps) [2024-11-28T05:19:42.091Z] Copying: 176/1024 [MB] (16 MBps) [2024-11-28T05:19:43.032Z] Copying: 187/1024 [MB] (10 MBps) [2024-11-28T05:19:43.970Z] Copying: 201/1024 [MB] (14 MBps) [2024-11-28T05:19:44.912Z] Copying: 241/1024 [MB] (39 MBps) [2024-11-28T05:19:45.919Z] Copying: 278/1024 [MB] (36 MBps) [2024-11-28T05:19:46.859Z] Copying: 297/1024 [MB] (19 MBps) [2024-11-28T05:19:47.800Z] Copying: 317/1024 [MB] (19 MBps) [2024-11-28T05:19:48.748Z] Copying: 335/1024 [MB] (18 MBps) [2024-11-28T05:19:50.133Z] Copying: 354/1024 [MB] (18 MBps) [2024-11-28T05:19:50.705Z] Copying: 374/1024 [MB] (20 MBps) [2024-11-28T05:19:52.088Z] Copying: 396/1024 [MB] (21 MBps) [2024-11-28T05:19:53.032Z] Copying: 417/1024 [MB] (21 MBps) [2024-11-28T05:19:53.978Z] Copying: 432/1024 [MB] (15 MBps) [2024-11-28T05:19:54.919Z] Copying: 442/1024 [MB] (10 MBps) [2024-11-28T05:19:55.863Z] Copying: 459/1024 [MB] (17 MBps) [2024-11-28T05:19:56.804Z] Copying: 471/1024 [MB] (11 MBps) [2024-11-28T05:19:57.746Z] Copying: 482/1024 [MB] (10 MBps) [2024-11-28T05:19:59.134Z] Copying: 503/1024 [MB] (21 MBps) [2024-11-28T05:19:59.705Z] Copying: 514/1024 [MB] (11 MBps) [2024-11-28T05:20:01.086Z] Copying: 537404/1048576 [kB] (10212 kBps) [2024-11-28T05:20:02.020Z] Copying: 547/1024 [MB] (23 MBps) [2024-11-28T05:20:02.964Z] Copying: 586/1024 [MB] (38 MBps) [2024-11-28T05:20:03.907Z] Copying: 602/1024 [MB] (15 MBps) [2024-11-28T05:20:04.850Z] Copying: 612/1024 [MB] (10 MBps) [2024-11-28T05:20:05.795Z] Copying: 626/1024 [MB] (14 MBps) [2024-11-28T05:20:06.740Z] Copying: 645/1024 [MB] (18 MBps) [2024-11-28T05:20:08.127Z] Copying: 660/1024 [MB] (14 MBps) [2024-11-28T05:20:09.067Z] Copying: 677/1024 [MB] (17 MBps) [2024-11-28T05:20:10.020Z] Copying: 695/1024 [MB] (17 MBps) [2024-11-28T05:20:10.965Z] Copying: 708/1024 [MB] (13 MBps) [2024-11-28T05:20:11.904Z] Copying: 721/1024 [MB] (13 MBps) [2024-11-28T05:20:12.849Z] Copying: 739/1024 [MB] (17 MBps) [2024-11-28T05:20:13.804Z] Copying: 757/1024 [MB] (17 MBps) [2024-11-28T05:20:14.789Z] Copying: 780/1024 [MB] (23 MBps) [2024-11-28T05:20:15.726Z] Copying: 801/1024 [MB] (20 MBps) [2024-11-28T05:20:17.103Z] Copying: 822/1024 [MB] (20 MBps) [2024-11-28T05:20:18.046Z] Copying: 853/1024 [MB] (31 MBps) [2024-11-28T05:20:18.991Z] Copying: 875/1024 [MB] (22 MBps) [2024-11-28T05:20:19.938Z] Copying: 886/1024 [MB] (10 MBps) [2024-11-28T05:20:20.877Z] Copying: 905/1024 [MB] (19 MBps) [2024-11-28T05:20:21.817Z] Copying: 919/1024 [MB] (13 MBps) [2024-11-28T05:20:22.754Z] Copying: 929/1024 [MB] (10 MBps) [2024-11-28T05:20:24.129Z] Copying: 943/1024 [MB] (13 MBps) [2024-11-28T05:20:25.062Z] Copying: 957/1024 [MB] (14 MBps) [2024-11-28T05:20:26.020Z] Copying: 979/1024 [MB] (21 MBps) [2024-11-28T05:20:26.958Z] Copying: 1000/1024 [MB] (20 MBps) 
[2024-11-28T05:20:27.220Z] Copying: 1013/1024 [MB] (13 MBps) [2024-11-28T05:20:27.220Z] Copying: 1024/1024 [MB] (average 17 MBps)
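The write phase completes here, at what the tool reports as an average of 17 MBps over the whole run. As a rough cross-check, the rate can be re-derived from the ISO timestamps on the progress records themselves; a minimal sketch using the first progress record of the run and the final one quoted above (the full run is just more lines of the same shape):

```python
import re
from datetime import datetime

# Two progress records copied from the run above.
log = """\
[2024-11-28T05:19:32.128Z] Copying: 21/1024 [MB] (21 MBps)
[2024-11-28T05:20:27.220Z] Copying: 1024/1024 [MB] (average 17 MBps)
"""

pat = re.compile(r"\[(?P<ts>[^\]]+)Z\] Copying: (?P<mb>\d+)/\d+ \[MB\]")
points = [(datetime.fromisoformat(m["ts"]), int(m["mb"]))
          for m in pat.finditer(log)]

(t0, mb0), (t1, mb1) = points[0], points[-1]
secs = (t1 - t0).total_seconds()
# ~1003 MB in ~55 s between these two stamps, i.e. ~18 MB/s, in line
# with the "average 17 MBps" the tool computes over its full runtime.
print(f"{(mb1 - mb0) / secs:.1f} MB/s over {secs:.1f} s")
```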
[2024-11-28 05:20:27.161936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.936 [2024-11-28 05:20:27.161973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:57.936 [2024-11-28 05:20:27.161984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:57.936 [2024-11-28 05:20:27.161994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.936 [2024-11-28 05:20:27.162010] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:57.936 [2024-11-28 05:20:27.162400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.936 [2024-11-28 05:20:27.162420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:57.936 [2024-11-28 05:20:27.162428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:30:57.936 [2024-11-28 05:20:27.162434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.936 [2024-11-28 05:20:27.164257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.936 [2024-11-28 05:20:27.164281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:57.936 [2024-11-28 05:20:27.164289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.807 ms 00:30:57.936 [2024-11-28 05:20:27.164295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.936 [2024-11-28 05:20:27.164326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.936 [2024-11-28 05:20:27.164334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:57.936 [2024-11-28 05:20:27.164340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:57.936 [2024-11-28 05:20:27.164346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.936 [2024-11-28 05:20:27.164384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.936 [2024-11-28 05:20:27.164391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:57.936 [2024-11-28 05:20:27.164397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:57.936 [2024-11-28 05:20:27.164403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.936 [2024-11-28 05:20:27.164413] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:57.936 [2024-11-28 05:20:27.164428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28
05:20:27.164611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:57.936 [2024-11-28 05:20:27.164695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 
00:30:57.937 [2024-11-28 05:20:27.164751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 
wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.164998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.165004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:57.937 [2024-11-28 05:20:27.165016] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:57.937 [2024-11-28 05:20:27.165022] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bac083d5-348e-4fb4-bb51-a59d48b2a450 00:30:57.937 [2024-11-28 05:20:27.165028] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:57.937 [2024-11-28 05:20:27.165034] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:57.937 [2024-11-28 05:20:27.165041] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:57.937 [2024-11-28 05:20:27.165047] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:57.937 [2024-11-28 05:20:27.165053] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] limits: 00:30:57.937 [2024-11-28 05:20:27.165059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:57.937 [2024-11-28 05:20:27.165064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:57.937 [2024-11-28 05:20:27.165069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:57.937 [2024-11-28 05:20:27.165073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:57.937 [2024-11-28 05:20:27.165078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.937 [2024-11-28 05:20:27.165086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:57.937 [2024-11-28 05:20:27.165093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:30:57.937 [2024-11-28 05:20:27.165098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.937 [2024-11-28 05:20:27.166327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.937 [2024-11-28 05:20:27.166350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:57.937 [2024-11-28 05:20:27.166357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:30:57.937 [2024-11-28 05:20:27.166367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.937 [2024-11-28 05:20:27.166432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.937 [2024-11-28 05:20:27.166441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:57.937 [2024-11-28 05:20:27.166448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:30:57.937 [2024-11-28 05:20:27.166456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.937 [2024-11-28 05:20:27.170474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.938 [2024-11-28 05:20:27.170499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:57.938 [2024-11-28 05:20:27.170506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.938 [2024-11-28 05:20:27.170515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.938 [2024-11-28 05:20:27.170556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.938 [2024-11-28 05:20:27.170565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:57.938 [2024-11-28 05:20:27.170571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.938 [2024-11-28 05:20:27.170578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.938 [2024-11-28 05:20:27.170601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.938 [2024-11-28 05:20:27.170609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:57.938 [2024-11-28 05:20:27.170614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.938 [2024-11-28 05:20:27.170620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.938 [2024-11-28 05:20:27.170631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.938 [2024-11-28 05:20:27.170637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:57.938 [2024-11-28 05:20:27.170644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.938 [2024-11-28 
05:20:27.170650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.938 [2024-11-28 05:20:27.177962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.938 [2024-11-28 05:20:27.177992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:57.938 [2024-11-28 05:20:27.178000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.938 [2024-11-28 05:20:27.178006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.938 [2024-11-28 05:20:27.183882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.938 [2024-11-28 05:20:27.183914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:57.938 [2024-11-28 05:20:27.183926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.938 [2024-11-28 05:20:27.183936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.938 [2024-11-28 05:20:27.183971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.938 [2024-11-28 05:20:27.183978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:57.938 [2024-11-28 05:20:27.183984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.938 [2024-11-28 05:20:27.183990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.938 [2024-11-28 05:20:27.184011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.938 [2024-11-28 05:20:27.184017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:57.938 [2024-11-28 05:20:27.184023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.938 [2024-11-28 05:20:27.184032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.938 [2024-11-28 05:20:27.184077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.938 [2024-11-28 05:20:27.184084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:57.938 [2024-11-28 05:20:27.184090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.938 [2024-11-28 05:20:27.184096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.938 [2024-11-28 05:20:27.184113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.938 [2024-11-28 05:20:27.184120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:57.938 [2024-11-28 05:20:27.184126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.938 [2024-11-28 05:20:27.184132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.938 [2024-11-28 05:20:27.184160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.938 [2024-11-28 05:20:27.184167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:57.938 [2024-11-28 05:20:27.184173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.938 [2024-11-28 05:20:27.184235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.938 [2024-11-28 05:20:27.184267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.938 [2024-11-28 05:20:27.184280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:57.938 [2024-11-28 05:20:27.184286] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.938 [2024-11-28 05:20:27.184294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.938 [2024-11-28 05:20:27.184388] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 22.427 ms, result 0 00:30:58.199 00:30:58.199 00:30:58.199 05:20:27 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
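This read-back pulls --count=262144 blocks of 4 KiB (the same 1 GiB) out of ftl0 and back into the testfile. The point of the restore flow, with the comparison itself presumably outside this excerpt, is that the md5sum taken at restore.sh@70 must still match once the data has made the round trip through the write, the FTL fast shutdown above, and this read-back. A minimal stand-alone sketch of that verification pattern (the file path is a stand-in and the middle steps are elided in comments; this is not the harness's actual code):

```python
import hashlib

def md5_of(path: str, chunk: int = 1 << 20) -> str:
    """Stream a file through MD5 the way `md5sum` does."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Hypothetical path standing in for the testfile used in the log above.
TESTFILE = "testfile"

digest_before = md5_of(TESTFILE)   # taken before writing into ftl0
# ... write TESTFILE into ftl0, fast-shutdown the FTL device,
# ... then read the same 262144 blocks back into TESTFILE ...
digest_after = md5_of(TESTFILE)
assert digest_before == digest_after, "data did not survive fast shutdown"
```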
00:30:58.461 [2024-11-28 05:20:27.496236] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:30:58.461 [2024-11-28 05:20:27.496383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95473 ] 00:30:58.461 [2024-11-28 05:20:27.642169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.461 [2024-11-28 05:20:27.670679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.723 [2024-11-28 05:20:27.786652] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 [2024-11-28 05:20:27.786747] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:58.723 [2024-11-28 05:20:27.946778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.723 [2024-11-28 05:20:27.946842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:58.723 [2024-11-28 05:20:27.946857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:58.723 [2024-11-28 05:20:27.946866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.723 [2024-11-28 05:20:27.946920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.723 [2024-11-28 05:20:27.946930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:58.723 [2024-11-28 05:20:27.946939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:58.723 [2024-11-28 05:20:27.946954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.723 [2024-11-28 05:20:27.946980] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:58.723 [2024-11-28 05:20:27.947316] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:58.723 [2024-11-28 05:20:27.947346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.723 [2024-11-28 05:20:27.947359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:58.723 [2024-11-28 05:20:27.947371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:30:58.723 [2024-11-28 05:20:27.947382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.723 [2024-11-28 05:20:27.947840] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:58.723 [2024-11-28 05:20:27.947883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.723 [2024-11-28 05:20:27.947892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:58.723 [2024-11-28 05:20:27.947902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:30:58.723 [2024-11-28 05:20:27.947914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.723 [2024-11-28 05:20:27.947972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.723 [2024-11-28 05:20:27.947983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:58.723 [2024-11-28 05:20:27.947993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:58.723 [2024-11-28 05:20:27.948005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.723 [2024-11-28 05:20:27.948278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.723 [2024-11-28 05:20:27.948292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:58.723 [2024-11-28 05:20:27.948305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:30:58.723 [2024-11-28 05:20:27.948313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.723 [2024-11-28 05:20:27.948397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.723 [2024-11-28 05:20:27.948409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:58.723 [2024-11-28 05:20:27.948419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:58.723 [2024-11-28 05:20:27.948426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.723 [2024-11-28 05:20:27.948450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.723 [2024-11-28 05:20:27.948464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:58.723 [2024-11-28 05:20:27.948476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:58.723 [2024-11-28 05:20:27.948485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.723 [2024-11-28 05:20:27.948507] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:58.723 [2024-11-28 05:20:27.950618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.723 [2024-11-28 05:20:27.950657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:58.723 [2024-11-28 05:20:27.950667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.114 ms 00:30:58.723 [2024-11-28 05:20:27.950675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.723 [2024-11-28 05:20:27.950711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.723 [2024-11-28 05:20:27.950720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:58.723 [2024-11-28 05:20:27.950728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:58.723 [2024-11-28 05:20:27.950736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.723 [2024-11-28 05:20:27.950785] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:58.723 [2024-11-28 05:20:27.950810] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:58.723 [2024-11-28 05:20:27.950856] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:58.723 [2024-11-28 05:20:27.950873] upgrade/ftl_sb_v5.c:
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:58.723 [2024-11-28 05:20:27.950982] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:58.723 [2024-11-28 05:20:27.950994] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:58.723 [2024-11-28 05:20:27.951006] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:58.723 [2024-11-28 05:20:27.951018] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:58.723 [2024-11-28 05:20:27.951034] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:58.723 [2024-11-28 05:20:27.951042] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:58.723 [2024-11-28 05:20:27.951051] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:58.723 [2024-11-28 05:20:27.951058] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:58.724 [2024-11-28 05:20:27.951066] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:58.724 [2024-11-28 05:20:27.951073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.724 [2024-11-28 05:20:27.951088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:58.724 [2024-11-28 05:20:27.951096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:30:58.724 [2024-11-28 05:20:27.951103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.724 [2024-11-28 05:20:27.951210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.724 [2024-11-28 05:20:27.951220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:58.724 [2024-11-28 05:20:27.951231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:30:58.724 [2024-11-28 05:20:27.951238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.724 [2024-11-28 05:20:27.951353] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:58.724 [2024-11-28 05:20:27.951367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:58.724 [2024-11-28 05:20:27.951378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:58.724 [2024-11-28 05:20:27.951390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.724 [2024-11-28 05:20:27.951406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:58.724 [2024-11-28 05:20:27.951416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:58.724 [2024-11-28 05:20:27.951425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:58.724 [2024-11-28 05:20:27.951433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:58.724 [2024-11-28 05:20:27.951441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:58.724 [2024-11-28 05:20:27.951449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:58.724 [2024-11-28 05:20:27.951460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:58.724 [2024-11-28 05:20:27.951469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:30:58.724 [2024-11-28 05:20:27.951476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:58.724 [2024-11-28 05:20:27.951485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:58.724 [2024-11-28 05:20:27.951493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:58.724 [2024-11-28 05:20:27.951501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.724 [2024-11-28 05:20:27.951509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:58.724 [2024-11-28 05:20:27.951516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:58.724 [2024-11-28 05:20:27.951526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.724 [2024-11-28 05:20:27.951534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:58.724 [2024-11-28 05:20:27.951541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:58.724 [2024-11-28 05:20:27.951549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.724 [2024-11-28 05:20:27.951556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:58.724 [2024-11-28 05:20:27.951564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:58.724 [2024-11-28 05:20:27.951573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.724 [2024-11-28 05:20:27.951581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:58.724 [2024-11-28 05:20:27.951588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:58.724 [2024-11-28 05:20:27.951596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.724 [2024-11-28 05:20:27.951603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:58.724 [2024-11-28 05:20:27.951611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:58.724 [2024-11-28 05:20:27.951618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.724 [2024-11-28 05:20:27.951626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:58.724 [2024-11-28 05:20:27.951633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:58.724 [2024-11-28 05:20:27.951643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:58.724 [2024-11-28 05:20:27.951654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:58.724 [2024-11-28 05:20:27.951664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:58.724 [2024-11-28 05:20:27.951671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:58.724 [2024-11-28 05:20:27.951678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:58.724 [2024-11-28 05:20:27.951684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:58.724 [2024-11-28 05:20:27.951691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.724 [2024-11-28 05:20:27.951697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:58.724 [2024-11-28 05:20:27.951704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:58.724 [2024-11-28 05:20:27.951710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.724 [2024-11-28 05:20:27.951717] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:58.724 [2024-11-28 05:20:27.951725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:58.724 [2024-11-28 05:20:27.951734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:58.724 [2024-11-28 05:20:27.951744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.724 [2024-11-28 05:20:27.951751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:58.724 [2024-11-28 05:20:27.951758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:58.724 [2024-11-28 05:20:27.951764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:58.724 [2024-11-28 05:20:27.951773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:58.724 [2024-11-28 05:20:27.951779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:58.724 [2024-11-28 05:20:27.951786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:58.724 [2024-11-28 05:20:27.951794] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:58.724 [2024-11-28 05:20:27.951805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:58.724 [2024-11-28 05:20:27.951814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:58.724 [2024-11-28 05:20:27.951822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:58.724 [2024-11-28 05:20:27.951829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:58.724 [2024-11-28 05:20:27.951835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:58.724 [2024-11-28 05:20:27.951843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:58.724 [2024-11-28 05:20:27.951849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:58.724 [2024-11-28 05:20:27.951856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:58.724 [2024-11-28 05:20:27.951863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:58.724 [2024-11-28 05:20:27.951870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:58.724 [2024-11-28 05:20:27.951877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:58.725 [2024-11-28 05:20:27.951885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:58.725 [2024-11-28 05:20:27.951903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:58.725 [2024-11-28 05:20:27.951911] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:58.725 [2024-11-28 05:20:27.951919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:58.725 [2024-11-28 05:20:27.951926] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:58.725 [2024-11-28 05:20:27.951935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:58.725 [2024-11-28 05:20:27.951943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:58.725 [2024-11-28 05:20:27.951950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:58.725 [2024-11-28 05:20:27.951959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:58.725 [2024-11-28 05:20:27.951968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:58.725 [2024-11-28 05:20:27.951976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.725 [2024-11-28 05:20:27.951984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:58.725 [2024-11-28 05:20:27.951992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:30:58.725 [2024-11-28 05:20:27.952003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.725 [2024-11-28 05:20:27.961642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.725 [2024-11-28 05:20:27.961694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:58.725 [2024-11-28 05:20:27.961709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.599 ms 00:30:58.725 [2024-11-28 05:20:27.961717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.725 [2024-11-28 05:20:27.961803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.725 [2024-11-28 05:20:27.961811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:58.725 [2024-11-28 05:20:27.961820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:30:58.725 [2024-11-28 05:20:27.961828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.725 [2024-11-28 05:20:27.981723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.725 [2024-11-28 05:20:27.981781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:58.725 [2024-11-28 05:20:27.981796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.838 ms 00:30:58.725 [2024-11-28 05:20:27.981805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.725 [2024-11-28 05:20:27.981854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.725 [2024-11-28 05:20:27.981865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:58.725 [2024-11-28 05:20:27.981902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:58.725 [2024-11-28 05:20:27.981912] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.725 [2024-11-28 05:20:27.982026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.725 [2024-11-28 05:20:27.982044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:58.725 [2024-11-28 05:20:27.982055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:58.725 [2024-11-28 05:20:27.982064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.725 [2024-11-28 05:20:27.982225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.725 [2024-11-28 05:20:27.982254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:58.725 [2024-11-28 05:20:27.982266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:30:58.725 [2024-11-28 05:20:27.982280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.725 [2024-11-28 05:20:27.990461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.725 [2024-11-28 05:20:27.990505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:58.725 [2024-11-28 05:20:27.990522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.157 ms 00:30:58.725 [2024-11-28 05:20:27.990530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.725 [2024-11-28 05:20:27.990645] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:58.725 [2024-11-28 05:20:27.990658] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:58.725 [2024-11-28 05:20:27.990674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.725 [2024-11-28 05:20:27.990683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:58.725 [2024-11-28 05:20:27.990692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:58.725 [2024-11-28 05:20:27.990707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.725 [2024-11-28 05:20:28.003184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.725 [2024-11-28 05:20:28.003228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:58.725 [2024-11-28 05:20:28.003244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.453 ms 00:30:58.725 [2024-11-28 05:20:28.003253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.725 [2024-11-28 05:20:28.003387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.725 [2024-11-28 05:20:28.003397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:58.725 [2024-11-28 05:20:28.003406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:30:58.725 [2024-11-28 05:20:28.003417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.725 [2024-11-28 05:20:28.003463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.725 [2024-11-28 05:20:28.003478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:58.725 [2024-11-28 05:20:28.003487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:30:58.725 [2024-11-28 05:20:28.003500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.725 [2024-11-28 
05:20:28.003809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.725 [2024-11-28 05:20:28.003834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:58.725 [2024-11-28 05:20:28.003846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:30:58.725 [2024-11-28 05:20:28.003857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.725 [2024-11-28 05:20:28.003875] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:58.725 [2024-11-28 05:20:28.003886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.725 [2024-11-28 05:20:28.003897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:58.725 [2024-11-28 05:20:28.003905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:58.725 [2024-11-28 05:20:28.003913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.988 [2024-11-28 05:20:28.013401] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:58.988 [2024-11-28 05:20:28.013559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.988 [2024-11-28 05:20:28.013570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:58.988 [2024-11-28 05:20:28.013588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.628 ms 00:30:58.988 [2024-11-28 05:20:28.013596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.988 [2024-11-28 05:20:28.015944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.988 [2024-11-28 05:20:28.015978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:58.988 [2024-11-28 05:20:28.015988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.321 ms 00:30:58.988 [2024-11-28 05:20:28.015999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.988 [2024-11-28 05:20:28.016090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.988 [2024-11-28 05:20:28.016101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:58.988 [2024-11-28 05:20:28.016110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:58.988 [2024-11-28 05:20:28.016121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.988 [2024-11-28 05:20:28.016150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.988 [2024-11-28 05:20:28.016159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:58.988 [2024-11-28 05:20:28.016168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:58.988 [2024-11-28 05:20:28.016191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.988 [2024-11-28 05:20:28.016230] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:58.988 [2024-11-28 05:20:28.016242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.988 [2024-11-28 05:20:28.016250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:58.988 [2024-11-28 05:20:28.016258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:30:58.988 [2024-11-28 05:20:28.016266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
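[editor's note] Each FTL management step in this trace is emitted as a four-record group from mngt/ftl_mngt.c: an "Action" (or, during teardown, "Rollback") marker, the step name, its duration, and a status code. Below is a minimal shell sketch for pulling per-step durations out of a saved copy of this console output; "build.log" is a hypothetical placeholder filename, and the field handling assumes the record layout shown above.

  # Pair each "428:trace_step ... name:" record with the
  # "430:trace_step ... duration:" record that follows it.
  # build.log is a hypothetical placeholder for this console output.
  awk '
    /428:trace_step/ { sub(/.*name: /, "");     name = $0 }
    /430:trace_step/ { sub(/.*duration: /, ""); printf "%-40s %s\n", name, $0 }
  ' build.log

Run against the startup sequence above, this would single out "Initialize NV cache" (19.838 ms) as the slowest step traced so far.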
00:30:58.988 [2024-11-28 05:20:28.022266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.988 [2024-11-28 05:20:28.022318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:58.988 [2024-11-28 05:20:28.022329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.977 ms 00:30:58.988 [2024-11-28 05:20:28.022337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.988 [2024-11-28 05:20:28.022424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.988 [2024-11-28 05:20:28.022434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:58.988 [2024-11-28 05:20:28.022444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:30:58.988 [2024-11-28 05:20:28.022461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.988 [2024-11-28 05:20:28.023593] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 76.350 ms, result 0 00:30:59.929  [2024-11-28T05:20:30.599Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-28T05:20:31.545Z] Copying: 37/1024 [MB] (22 MBps) [2024-11-28T05:20:32.491Z] Copying: 57/1024 [MB] (20 MBps) [2024-11-28T05:20:33.434Z] Copying: 75/1024 [MB] (18 MBps) [2024-11-28T05:20:34.379Z] Copying: 94/1024 [MB] (18 MBps) [2024-11-28T05:20:35.324Z] Copying: 114/1024 [MB] (19 MBps) [2024-11-28T05:20:36.265Z] Copying: 134/1024 [MB] (19 MBps) [2024-11-28T05:20:37.211Z] Copying: 147/1024 [MB] (12 MBps) [2024-11-28T05:20:38.601Z] Copying: 163/1024 [MB] (16 MBps) [2024-11-28T05:20:39.545Z] Copying: 186/1024 [MB] (22 MBps) [2024-11-28T05:20:40.488Z] Copying: 203/1024 [MB] (17 MBps) [2024-11-28T05:20:41.430Z] Copying: 215/1024 [MB] (11 MBps) [2024-11-28T05:20:42.376Z] Copying: 227/1024 [MB] (11 MBps) [2024-11-28T05:20:43.378Z] Copying: 248/1024 [MB] (20 MBps) [2024-11-28T05:20:44.325Z] Copying: 265/1024 [MB] (17 MBps) [2024-11-28T05:20:45.269Z] Copying: 286/1024 [MB] (20 MBps) [2024-11-28T05:20:46.214Z] Copying: 303/1024 [MB] (17 MBps) [2024-11-28T05:20:47.602Z] Copying: 329/1024 [MB] (25 MBps) [2024-11-28T05:20:48.547Z] Copying: 350/1024 [MB] (21 MBps) [2024-11-28T05:20:49.494Z] Copying: 362/1024 [MB] (12 MBps) [2024-11-28T05:20:50.439Z] Copying: 378/1024 [MB] (15 MBps) [2024-11-28T05:20:51.384Z] Copying: 396/1024 [MB] (17 MBps) [2024-11-28T05:20:52.330Z] Copying: 411/1024 [MB] (15 MBps) [2024-11-28T05:20:53.272Z] Copying: 425/1024 [MB] (13 MBps) [2024-11-28T05:20:54.218Z] Copying: 437/1024 [MB] (12 MBps) [2024-11-28T05:20:55.606Z] Copying: 454/1024 [MB] (16 MBps) [2024-11-28T05:20:56.553Z] Copying: 466/1024 [MB] (12 MBps) [2024-11-28T05:20:57.498Z] Copying: 477/1024 [MB] (10 MBps) [2024-11-28T05:20:58.443Z] Copying: 491/1024 [MB] (14 MBps) [2024-11-28T05:20:59.385Z] Copying: 502/1024 [MB] (11 MBps) [2024-11-28T05:21:00.330Z] Copying: 522/1024 [MB] (20 MBps) [2024-11-28T05:21:01.275Z] Copying: 535/1024 [MB] (12 MBps) [2024-11-28T05:21:02.218Z] Copying: 548/1024 [MB] (12 MBps) [2024-11-28T05:21:03.605Z] Copying: 561/1024 [MB] (13 MBps) [2024-11-28T05:21:04.550Z] Copying: 573/1024 [MB] (11 MBps) [2024-11-28T05:21:05.495Z] Copying: 586/1024 [MB] (13 MBps) [2024-11-28T05:21:06.441Z] Copying: 600/1024 [MB] (13 MBps) [2024-11-28T05:21:07.389Z] Copying: 612/1024 [MB] (11 MBps) [2024-11-28T05:21:08.334Z] Copying: 622/1024 [MB] (10 MBps) [2024-11-28T05:21:09.280Z] Copying: 633/1024 [MB] (10 MBps) [2024-11-28T05:21:10.226Z] Copying: 644/1024 [MB] (10 MBps) 
[2024-11-28T05:21:11.664Z] Copying: 655/1024 [MB] (10 MBps) [2024-11-28T05:21:12.254Z] Copying: 665/1024 [MB] (10 MBps) [2024-11-28T05:21:13.641Z] Copying: 676/1024 [MB] (10 MBps) [2024-11-28T05:21:14.216Z] Copying: 687/1024 [MB] (10 MBps) [2024-11-28T05:21:15.607Z] Copying: 705/1024 [MB] (18 MBps) [2024-11-28T05:21:16.548Z] Copying: 721/1024 [MB] (15 MBps) [2024-11-28T05:21:17.490Z] Copying: 734/1024 [MB] (13 MBps) [2024-11-28T05:21:18.436Z] Copying: 750/1024 [MB] (15 MBps) [2024-11-28T05:21:19.381Z] Copying: 768/1024 [MB] (18 MBps) [2024-11-28T05:21:20.326Z] Copying: 788/1024 [MB] (19 MBps) [2024-11-28T05:21:21.270Z] Copying: 804/1024 [MB] (16 MBps) [2024-11-28T05:21:22.214Z] Copying: 822/1024 [MB] (17 MBps) [2024-11-28T05:21:23.600Z] Copying: 839/1024 [MB] (17 MBps) [2024-11-28T05:21:24.543Z] Copying: 857/1024 [MB] (17 MBps) [2024-11-28T05:21:25.484Z] Copying: 879/1024 [MB] (22 MBps) [2024-11-28T05:21:26.426Z] Copying: 894/1024 [MB] (14 MBps) [2024-11-28T05:21:27.368Z] Copying: 914/1024 [MB] (20 MBps) [2024-11-28T05:21:28.311Z] Copying: 925/1024 [MB] (10 MBps) [2024-11-28T05:21:29.257Z] Copying: 940/1024 [MB] (14 MBps) [2024-11-28T05:21:30.642Z] Copying: 954/1024 [MB] (14 MBps) [2024-11-28T05:21:31.212Z] Copying: 970/1024 [MB] (15 MBps) [2024-11-28T05:21:32.598Z] Copying: 983/1024 [MB] (13 MBps) [2024-11-28T05:21:33.542Z] Copying: 996/1024 [MB] (13 MBps) [2024-11-28T05:21:34.484Z] Copying: 1007/1024 [MB] (10 MBps) [2024-11-28T05:21:35.057Z] Copying: 1018/1024 [MB] (10 MBps) [2024-11-28T05:21:35.318Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-28 05:21:35.096214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.034 [2024-11-28 05:21:35.096315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:06.034 [2024-11-28 05:21:35.096334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:06.034 [2024-11-28 05:21:35.096345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.034 [2024-11-28 05:21:35.096380] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:06.034 [2024-11-28 05:21:35.097361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.034 [2024-11-28 05:21:35.097406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:06.034 [2024-11-28 05:21:35.097430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:32:06.034 [2024-11-28 05:21:35.097442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.034 [2024-11-28 05:21:35.097764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.034 [2024-11-28 05:21:35.097778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:06.034 [2024-11-28 05:21:35.097789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:32:06.034 [2024-11-28 05:21:35.097803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.034 [2024-11-28 05:21:35.097849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.034 [2024-11-28 05:21:35.097860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:06.034 [2024-11-28 05:21:35.097870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:06.034 [2024-11-28 05:21:35.097879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.034 [2024-11-28 
05:21:35.097948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.034 [2024-11-28 05:21:35.097961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:06.034 [2024-11-28 05:21:35.097976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:06.034 [2024-11-28 05:21:35.097986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.034 [2024-11-28 05:21:35.098003] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:06.034 [2024-11-28 05:21:35.098018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 
[2024-11-28 05:21:35.098229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 
state: free 00:32:06.034 [2024-11-28 05:21:35.098430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 
0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:06.034 [2024-11-28 05:21:35.098866] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:06.034 [2024-11-28 05:21:35.098875] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bac083d5-348e-4fb4-bb51-a59d48b2a450 00:32:06.034 [2024-11-28 05:21:35.098884] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:06.034 [2024-11-28 05:21:35.098896] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:06.034 [2024-11-28 05:21:35.098905] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:06.034 [2024-11-28 05:21:35.098913] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:06.034 [2024-11-28 05:21:35.098928] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:06.034 [2024-11-28 05:21:35.098936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:06.034 [2024-11-28 05:21:35.098944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:06.034 [2024-11-28 05:21:35.098950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:06.034 [2024-11-28 05:21:35.098957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:06.034 [2024-11-28 05:21:35.098965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.034 [2024-11-28 05:21:35.098973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:06.034 [2024-11-28 05:21:35.098982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:32:06.034 [2024-11-28 05:21:35.098997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.034 [2024-11-28 05:21:35.101663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.034 [2024-11-28 05:21:35.101709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:06.035 [2024-11-28 05:21:35.101720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.648 ms 00:32:06.035 [2024-11-28 05:21:35.101729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.035 [2024-11-28 05:21:35.101851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.035 [2024-11-28 05:21:35.101860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:06.035 [2024-11-28 05:21:35.101874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:32:06.035 [2024-11-28 05:21:35.101887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.035 [2024-11-28 05:21:35.111033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:06.035 [2024-11-28 05:21:35.111088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:06.035 [2024-11-28 05:21:35.111100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:32:06.035 [2024-11-28 05:21:35.111108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.035 [2024-11-28 05:21:35.111198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:06.035 [2024-11-28 05:21:35.111207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:06.035 [2024-11-28 05:21:35.111220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:06.035 [2024-11-28 05:21:35.111227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.035 [2024-11-28 05:21:35.111298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:06.035 [2024-11-28 05:21:35.111310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:06.035 [2024-11-28 05:21:35.111318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:06.035 [2024-11-28 05:21:35.111326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.035 [2024-11-28 05:21:35.111344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:06.035 [2024-11-28 05:21:35.111352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:06.035 [2024-11-28 05:21:35.111360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:06.035 [2024-11-28 05:21:35.111376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.035 [2024-11-28 05:21:35.128211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:06.035 [2024-11-28 05:21:35.128261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:06.035 [2024-11-28 05:21:35.128273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:06.035 [2024-11-28 05:21:35.128282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.035 [2024-11-28 05:21:35.139331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:06.035 [2024-11-28 05:21:35.139385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:06.035 [2024-11-28 05:21:35.139400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:06.035 [2024-11-28 05:21:35.139412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.035 [2024-11-28 05:21:35.139465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:06.035 [2024-11-28 05:21:35.139475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:06.035 [2024-11-28 05:21:35.139483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:06.035 [2024-11-28 05:21:35.139491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.035 [2024-11-28 05:21:35.139529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:06.035 [2024-11-28 05:21:35.139539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:06.035 [2024-11-28 05:21:35.139551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:06.035 [2024-11-28 05:21:35.139558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.035 [2024-11-28 05:21:35.139612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:06.035 [2024-11-28 05:21:35.139622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 
00:32:06.035 [2024-11-28 05:21:35.139630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:06.035 [2024-11-28 05:21:35.139638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.035 [2024-11-28 05:21:35.139668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:06.035 [2024-11-28 05:21:35.139677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:06.035 [2024-11-28 05:21:35.139685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:06.035 [2024-11-28 05:21:35.139692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.035 [2024-11-28 05:21:35.139734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:06.035 [2024-11-28 05:21:35.139744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:06.035 [2024-11-28 05:21:35.139753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:06.035 [2024-11-28 05:21:35.139761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.035 [2024-11-28 05:21:35.139803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:06.035 [2024-11-28 05:21:35.139813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:06.035 [2024-11-28 05:21:35.139821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:06.035 [2024-11-28 05:21:35.139828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.035 [2024-11-28 05:21:35.139953] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 43.742 ms, result 0 00:32:06.295 00:32:06.295 00:32:06.295 05:21:35 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:08.848 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:08.848 05:21:37 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:32:08.848 [2024-11-28 05:21:37.637989] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
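[editor's note] Lines @74, @76 and @79 of restore.sh drive a read-back / verify / write-back round trip against the ftl0 bdev. Below is a condensed sketch of that flow using only the spdk_dd flags that appear in this log; the paths are the ones printed above, the 4 KiB block size is an inference from 262144 blocks matching the 1024 MiB copied, and reading 131072 as the halfway offset is a guess.

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  FILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile

  # @74: dump 262144 blocks of the ftl0 bdev to a regular file.
  # 262144 x 4096 B (assumed block size) = 1024 MiB, matching the
  # "Copying: 1024/1024 [MB]" progress shown above.
  "$DD" --ib=ftl0 --of="$FILE" --json="$CFG" --count=262144

  # @76: verify the dump against the checksum recorded earlier in the test.
  md5sum -c "$FILE.md5"

  # @79: write the file back to ftl0 at block offset 131072 (presumably
  # half of the 262144-block region), exercising the restore path.
  "$DD" --if="$FILE" --ob=ftl0 --json="$CFG" --seek=131072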
00:32:08.848 [2024-11-28 05:21:37.638142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96196 ] 00:32:08.848 [2024-11-28 05:21:37.785216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.848 [2024-11-28 05:21:37.814358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.848 [2024-11-28 05:21:37.925043] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:08.848 [2024-11-28 05:21:37.925129] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:08.848 [2024-11-28 05:21:38.086279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.848 [2024-11-28 05:21:38.086344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:08.848 [2024-11-28 05:21:38.086364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:08.848 [2024-11-28 05:21:38.086372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.848 [2024-11-28 05:21:38.086429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.848 [2024-11-28 05:21:38.086440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:08.848 [2024-11-28 05:21:38.086450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:08.848 [2024-11-28 05:21:38.086464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.848 [2024-11-28 05:21:38.086498] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:08.848 [2024-11-28 05:21:38.087127] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:08.848 [2024-11-28 05:21:38.087207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.848 [2024-11-28 05:21:38.087220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:08.848 [2024-11-28 05:21:38.087234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:32:08.848 [2024-11-28 05:21:38.087243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.848 [2024-11-28 05:21:38.087975] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:08.848 [2024-11-28 05:21:38.088036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.848 [2024-11-28 05:21:38.088047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:08.848 [2024-11-28 05:21:38.088063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:32:08.848 [2024-11-28 05:21:38.088075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.848 [2024-11-28 05:21:38.088135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.848 [2024-11-28 05:21:38.088145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:08.848 [2024-11-28 05:21:38.088154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:08.848 [2024-11-28 05:21:38.088168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.848 [2024-11-28 05:21:38.088453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:08.848 [2024-11-28 05:21:38.088465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:08.848 [2024-11-28 05:21:38.088477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:32:08.848 [2024-11-28 05:21:38.088486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.848 [2024-11-28 05:21:38.088571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.848 [2024-11-28 05:21:38.088593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:08.848 [2024-11-28 05:21:38.088602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:08.848 [2024-11-28 05:21:38.088611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.848 [2024-11-28 05:21:38.088635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.848 [2024-11-28 05:21:38.088645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:08.848 [2024-11-28 05:21:38.088654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:08.848 [2024-11-28 05:21:38.088662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.848 [2024-11-28 05:21:38.088685] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:08.848 [2024-11-28 05:21:38.090906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.848 [2024-11-28 05:21:38.090948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:08.848 [2024-11-28 05:21:38.090959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.224 ms 00:32:08.848 [2024-11-28 05:21:38.090966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.848 [2024-11-28 05:21:38.091005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.848 [2024-11-28 05:21:38.091013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:08.848 [2024-11-28 05:21:38.091022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:32:08.848 [2024-11-28 05:21:38.091030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.848 [2024-11-28 05:21:38.091091] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:08.848 [2024-11-28 05:21:38.091117] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:08.849 [2024-11-28 05:21:38.091160] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:08.849 [2024-11-28 05:21:38.091201] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:08.849 [2024-11-28 05:21:38.091307] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:08.849 [2024-11-28 05:21:38.091318] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:08.849 [2024-11-28 05:21:38.091330] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:08.849 [2024-11-28 05:21:38.091345] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:08.849 [2024-11-28 05:21:38.091357] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:08.849 [2024-11-28 05:21:38.091365] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:08.849 [2024-11-28 05:21:38.091373] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:08.849 [2024-11-28 05:21:38.091380] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:08.849 [2024-11-28 05:21:38.091387] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:08.849 [2024-11-28 05:21:38.091397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.849 [2024-11-28 05:21:38.091408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:08.849 [2024-11-28 05:21:38.091415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:32:08.849 [2024-11-28 05:21:38.091424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.849 [2024-11-28 05:21:38.091512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.849 [2024-11-28 05:21:38.091520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:08.849 [2024-11-28 05:21:38.091535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:08.849 [2024-11-28 05:21:38.091543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.849 [2024-11-28 05:21:38.091645] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:08.849 [2024-11-28 05:21:38.091666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:08.849 [2024-11-28 05:21:38.091681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:08.849 [2024-11-28 05:21:38.091691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:08.849 [2024-11-28 05:21:38.091700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:08.849 [2024-11-28 05:21:38.091708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:08.849 [2024-11-28 05:21:38.091717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:08.849 [2024-11-28 05:21:38.091726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:08.849 [2024-11-28 05:21:38.091734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:08.849 [2024-11-28 05:21:38.091742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:08.849 [2024-11-28 05:21:38.091750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:08.849 [2024-11-28 05:21:38.091761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:08.849 [2024-11-28 05:21:38.091769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:08.849 [2024-11-28 05:21:38.091778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:08.849 [2024-11-28 05:21:38.091786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:08.849 [2024-11-28 05:21:38.091794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:08.849 [2024-11-28 05:21:38.091802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:08.849 [2024-11-28 05:21:38.091810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:08.849 [2024-11-28 05:21:38.091820] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:08.849 [2024-11-28 05:21:38.091828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:08.849 [2024-11-28 05:21:38.091836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:08.849 [2024-11-28 05:21:38.091843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:08.849 [2024-11-28 05:21:38.091851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:08.849 [2024-11-28 05:21:38.091860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:08.849 [2024-11-28 05:21:38.091867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:08.849 [2024-11-28 05:21:38.091875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:08.849 [2024-11-28 05:21:38.091882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:08.849 [2024-11-28 05:21:38.091889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:08.849 [2024-11-28 05:21:38.091897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:08.849 [2024-11-28 05:21:38.091905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:08.849 [2024-11-28 05:21:38.091913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:08.849 [2024-11-28 05:21:38.091921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:08.849 [2024-11-28 05:21:38.091929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:08.849 [2024-11-28 05:21:38.091936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:08.849 [2024-11-28 05:21:38.091948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:08.849 [2024-11-28 05:21:38.091956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:08.849 [2024-11-28 05:21:38.091964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:08.849 [2024-11-28 05:21:38.091971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:08.849 [2024-11-28 05:21:38.091977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:08.849 [2024-11-28 05:21:38.091983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:08.849 [2024-11-28 05:21:38.091990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:08.849 [2024-11-28 05:21:38.091996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:08.849 [2024-11-28 05:21:38.092003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:08.849 [2024-11-28 05:21:38.092012] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:08.849 [2024-11-28 05:21:38.092024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:08.849 [2024-11-28 05:21:38.092031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:08.849 [2024-11-28 05:21:38.092042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:08.849 [2024-11-28 05:21:38.092050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:08.849 [2024-11-28 05:21:38.092056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:08.849 [2024-11-28 05:21:38.092063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:08.849 
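The layout dump is internally consistent and can be sanity-checked with nothing but the numbers it prints: the 80.00 MiB l2p region is exactly "L2P entries" × "L2P address size" (20971520 × 4 B), and the NV-cache metadata regions tile back to back, each offset being the previous region's offset plus its size (the dump rounds to 1/100 MiB). A quick check, with all figures copied from the dump above:

```python
# Quick cross-check of the layout dump: the L2P table size follows directly
# from "L2P entries" x "L2P address size", and the NV-cache regions are tiled
# back to back. All numbers are copied from the dump; nothing here is an
# SPDK API.
MiB = 1024 * 1024

l2p_entries, l2p_addr_size = 20971520, 4
assert l2p_entries * l2p_addr_size == 80 * MiB   # matches "blocks: 80.00 MiB"

# (name, offset MiB, size MiB) in address order for the contiguous regions.
regions = [
    ("sb",              0.00,  0.12), ("l2p",             0.12, 80.00),
    ("band_md",        80.12,  0.50), ("band_md_mirror",  80.62,  0.50),
    ("p2l0",           81.12,  8.00), ("p2l1",            89.12,  8.00),
    ("p2l2",           97.12,  8.00), ("p2l3",           105.12,  8.00),
    ("trim_md",       113.12,  0.25), ("trim_md_mirror", 113.38,  0.25),
    ("trim_log",      113.62,  0.12), ("trim_log_mirror",113.75,  0.12),
    ("nvc_md",        113.88,  0.12), ("nvc_md_mirror",  114.00,  0.12),
]
for (_, off, size), (name, nxt, _) in zip(regions, regions[1:]):
    # The dump prints offsets rounded to 1/100 MiB, so allow that much slack.
    assert abs(off + size - nxt) < 0.02, name
```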
[2024-11-28 05:21:38.092073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:08.849 [2024-11-28 05:21:38.092079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:08.849 [2024-11-28 05:21:38.092086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:08.849 [2024-11-28 05:21:38.092094] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:08.849 [2024-11-28 05:21:38.092104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:08.849 [2024-11-28 05:21:38.092112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:08.849 [2024-11-28 05:21:38.092120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:08.849 [2024-11-28 05:21:38.092127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:08.849 [2024-11-28 05:21:38.092134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:08.849 [2024-11-28 05:21:38.092141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:08.849 [2024-11-28 05:21:38.092148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:08.849 [2024-11-28 05:21:38.092155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:08.849 [2024-11-28 05:21:38.092162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:08.849 [2024-11-28 05:21:38.092169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:08.849 [2024-11-28 05:21:38.092203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:08.849 [2024-11-28 05:21:38.092212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:08.849 [2024-11-28 05:21:38.092227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:08.849 [2024-11-28 05:21:38.092234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:08.849 [2024-11-28 05:21:38.092241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:08.849 [2024-11-28 05:21:38.092248] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:08.849 [2024-11-28 05:21:38.092256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:08.849 [2024-11-28 05:21:38.092265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:08.849 [2024-11-28 05:21:38.092272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:08.849 [2024-11-28 05:21:38.092279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:08.849 [2024-11-28 05:21:38.092287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:08.849 [2024-11-28 05:21:38.092297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.849 [2024-11-28 05:21:38.092305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:08.849 [2024-11-28 05:21:38.092314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:32:08.849 [2024-11-28 05:21:38.092322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.850 [2024-11-28 05:21:38.102610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.850 [2024-11-28 05:21:38.102657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:08.850 [2024-11-28 05:21:38.102674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.245 ms 00:32:08.850 [2024-11-28 05:21:38.102685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.850 [2024-11-28 05:21:38.102767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.850 [2024-11-28 05:21:38.102776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:08.850 [2024-11-28 05:21:38.102786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:32:08.850 [2024-11-28 05:21:38.102797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.850 [2024-11-28 05:21:38.122471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.850 [2024-11-28 05:21:38.122529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:08.850 [2024-11-28 05:21:38.122542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.614 ms 00:32:08.850 [2024-11-28 05:21:38.122551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.850 [2024-11-28 05:21:38.122601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.850 [2024-11-28 05:21:38.122611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:08.850 [2024-11-28 05:21:38.122621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:08.850 [2024-11-28 05:21:38.122629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.850 [2024-11-28 05:21:38.122746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.850 [2024-11-28 05:21:38.122762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:08.850 [2024-11-28 05:21:38.122771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:32:08.850 [2024-11-28 05:21:38.122779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:08.850 [2024-11-28 05:21:38.122911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:08.850 [2024-11-28 05:21:38.122933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:08.850 [2024-11-28 05:21:38.122943] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:32:08.850 [2024-11-28 05:21:38.122955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.112 [2024-11-28 05:21:38.130903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.112 [2024-11-28 05:21:38.130950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:09.112 [2024-11-28 05:21:38.130967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.928 ms 00:32:09.112 [2024-11-28 05:21:38.130975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.112 [2024-11-28 05:21:38.131096] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:09.112 [2024-11-28 05:21:38.131113] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:09.112 [2024-11-28 05:21:38.131124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.112 [2024-11-28 05:21:38.131133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:09.112 [2024-11-28 05:21:38.131141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:32:09.112 [2024-11-28 05:21:38.131151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.112 [2024-11-28 05:21:38.143474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.112 [2024-11-28 05:21:38.143517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:09.112 [2024-11-28 05:21:38.143533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.299 ms 00:32:09.112 [2024-11-28 05:21:38.143542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.112 [2024-11-28 05:21:38.143676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.112 [2024-11-28 05:21:38.143687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:09.112 [2024-11-28 05:21:38.143695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:32:09.112 [2024-11-28 05:21:38.143707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.113 [2024-11-28 05:21:38.143766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.113 [2024-11-28 05:21:38.143780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:09.113 [2024-11-28 05:21:38.143789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:09.113 [2024-11-28 05:21:38.143797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.113 [2024-11-28 05:21:38.144106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.113 [2024-11-28 05:21:38.144134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:09.113 [2024-11-28 05:21:38.144148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:32:09.113 [2024-11-28 05:21:38.144161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.113 [2024-11-28 05:21:38.144201] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:09.113 [2024-11-28 05:21:38.144213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.113 [2024-11-28 05:21:38.144224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:09.113 [2024-11-28 05:21:38.144232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:09.113 [2024-11-28 05:21:38.144240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.113 [2024-11-28 05:21:38.153559] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:09.113 [2024-11-28 05:21:38.153732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.113 [2024-11-28 05:21:38.153749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:09.113 [2024-11-28 05:21:38.153758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.474 ms 00:32:09.113 [2024-11-28 05:21:38.153766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.113 [2024-11-28 05:21:38.156123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.113 [2024-11-28 05:21:38.156158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:09.113 [2024-11-28 05:21:38.156168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.328 ms 00:32:09.113 [2024-11-28 05:21:38.156189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.113 [2024-11-28 05:21:38.156289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.113 [2024-11-28 05:21:38.156301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:09.113 [2024-11-28 05:21:38.156310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:09.113 [2024-11-28 05:21:38.156323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.113 [2024-11-28 05:21:38.156351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.113 [2024-11-28 05:21:38.156361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:09.113 [2024-11-28 05:21:38.156369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:09.113 [2024-11-28 05:21:38.156377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.113 [2024-11-28 05:21:38.156414] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:09.113 [2024-11-28 05:21:38.156425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.113 [2024-11-28 05:21:38.156433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:09.113 [2024-11-28 05:21:38.156442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:09.113 [2024-11-28 05:21:38.156450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.113 [2024-11-28 05:21:38.163284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.113 [2024-11-28 05:21:38.163338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:09.113 [2024-11-28 05:21:38.163350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.810 ms 00:32:09.113 [2024-11-28 05:21:38.163358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.113 [2024-11-28 05:21:38.163450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.113 [2024-11-28 05:21:38.163461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:09.113 [2024-11-28 05:21:38.163470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.043 ms 00:32:09.113 [2024-11-28 05:21:38.163482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.113 [2024-11-28 05:21:38.164698] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 77.942 ms, result 0 00:32:10.060  [2024-11-28T05:21:40.290Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-28T05:21:41.328Z] Copying: 20/1024 [MB] (10 MBps) [2024-11-28T05:21:42.274Z] Copying: 49/1024 [MB] (28 MBps) [2024-11-28T05:21:43.233Z] Copying: 67/1024 [MB] (18 MBps) [2024-11-28T05:21:44.614Z] Copying: 83/1024 [MB] (15 MBps) [2024-11-28T05:21:45.186Z] Copying: 119/1024 [MB] (36 MBps) [2024-11-28T05:21:46.570Z] Copying: 137/1024 [MB] (18 MBps) [2024-11-28T05:21:47.515Z] Copying: 154/1024 [MB] (16 MBps) [2024-11-28T05:21:48.455Z] Copying: 169/1024 [MB] (15 MBps) [2024-11-28T05:21:49.390Z] Copying: 207/1024 [MB] (37 MBps) [2024-11-28T05:21:50.335Z] Copying: 244/1024 [MB] (37 MBps) [2024-11-28T05:21:51.280Z] Copying: 275/1024 [MB] (30 MBps) [2024-11-28T05:21:52.225Z] Copying: 290/1024 [MB] (15 MBps) [2024-11-28T05:21:53.611Z] Copying: 306/1024 [MB] (15 MBps) [2024-11-28T05:21:54.177Z] Copying: 325/1024 [MB] (19 MBps) [2024-11-28T05:21:55.560Z] Copying: 374/1024 [MB] (49 MBps) [2024-11-28T05:21:56.504Z] Copying: 399/1024 [MB] (25 MBps) [2024-11-28T05:21:57.447Z] Copying: 418/1024 [MB] (18 MBps) [2024-11-28T05:21:58.391Z] Copying: 435/1024 [MB] (16 MBps) [2024-11-28T05:21:59.337Z] Copying: 450/1024 [MB] (15 MBps) [2024-11-28T05:22:00.281Z] Copying: 466/1024 [MB] (16 MBps) [2024-11-28T05:22:01.226Z] Copying: 481/1024 [MB] (15 MBps) [2024-11-28T05:22:02.615Z] Copying: 492/1024 [MB] (10 MBps) [2024-11-28T05:22:03.188Z] Copying: 505/1024 [MB] (13 MBps) [2024-11-28T05:22:04.575Z] Copying: 523/1024 [MB] (17 MBps) [2024-11-28T05:22:05.520Z] Copying: 533/1024 [MB] (10 MBps) [2024-11-28T05:22:06.465Z] Copying: 543/1024 [MB] (10 MBps) [2024-11-28T05:22:07.409Z] Copying: 560/1024 [MB] (16 MBps) [2024-11-28T05:22:08.353Z] Copying: 572/1024 [MB] (12 MBps) [2024-11-28T05:22:09.298Z] Copying: 585/1024 [MB] (12 MBps) [2024-11-28T05:22:10.306Z] Copying: 595/1024 [MB] (10 MBps) [2024-11-28T05:22:11.253Z] Copying: 619680/1048576 [kB] (10144 kBps) [2024-11-28T05:22:12.195Z] Copying: 615/1024 [MB] (10 MBps) [2024-11-28T05:22:13.580Z] Copying: 626/1024 [MB] (10 MBps) [2024-11-28T05:22:14.525Z] Copying: 636/1024 [MB] (10 MBps) [2024-11-28T05:22:15.460Z] Copying: 646/1024 [MB] (10 MBps) [2024-11-28T05:22:16.405Z] Copying: 675/1024 [MB] (28 MBps) [2024-11-28T05:22:17.350Z] Copying: 691/1024 [MB] (15 MBps) [2024-11-28T05:22:18.295Z] Copying: 702/1024 [MB] (11 MBps) [2024-11-28T05:22:19.241Z] Copying: 712/1024 [MB] (10 MBps) [2024-11-28T05:22:20.180Z] Copying: 723/1024 [MB] (10 MBps) [2024-11-28T05:22:21.569Z] Copying: 743/1024 [MB] (20 MBps) [2024-11-28T05:22:22.508Z] Copying: 755/1024 [MB] (11 MBps) [2024-11-28T05:22:23.441Z] Copying: 768/1024 [MB] (13 MBps) [2024-11-28T05:22:24.386Z] Copying: 797/1024 [MB] (28 MBps) [2024-11-28T05:22:25.331Z] Copying: 816/1024 [MB] (18 MBps) [2024-11-28T05:22:26.273Z] Copying: 828/1024 [MB] (12 MBps) [2024-11-28T05:22:27.218Z] Copying: 843/1024 [MB] (15 MBps) [2024-11-28T05:22:28.606Z] Copying: 856/1024 [MB] (12 MBps) [2024-11-28T05:22:29.178Z] Copying: 866/1024 [MB] (10 MBps) [2024-11-28T05:22:30.553Z] Copying: 877/1024 [MB] (10 MBps) [2024-11-28T05:22:31.486Z] Copying: 900/1024 [MB] (23 MBps) [2024-11-28T05:22:32.422Z] Copying: 923/1024 [MB] (22 MBps) [2024-11-28T05:22:33.365Z] Copying: 
949/1024 [MB] (25 MBps) [2024-11-28T05:22:34.309Z] Copying: 966/1024 [MB] (17 MBps) [2024-11-28T05:22:35.252Z] Copying: 987/1024 [MB] (21 MBps) [2024-11-28T05:22:36.194Z] Copying: 998/1024 [MB] (10 MBps) [2024-11-28T05:22:37.578Z] Copying: 1014/1024 [MB] (16 MBps) [2024-11-28T05:22:38.179Z] Copying: 1047868/1048576 [kB] (9164 kBps) [2024-11-28T05:22:38.179Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-28 05:22:37.891065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.895 [2024-11-28 05:22:37.891122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:08.895 [2024-11-28 05:22:37.891137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:08.895 [2024-11-28 05:22:37.891145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.895 [2024-11-28 05:22:37.893226] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:08.895 [2024-11-28 05:22:37.895387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.895 [2024-11-28 05:22:37.895415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:08.895 [2024-11-28 05:22:37.895425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.122 ms 00:33:08.895 [2024-11-28 05:22:37.895433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.895 [2024-11-28 05:22:37.907224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.895 [2024-11-28 05:22:37.907253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:08.895 [2024-11-28 05:22:37.907263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.090 ms 00:33:08.895 [2024-11-28 05:22:37.907277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.895 [2024-11-28 05:22:37.907310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.895 [2024-11-28 05:22:37.907319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:08.895 [2024-11-28 05:22:37.907334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:08.895 [2024-11-28 05:22:37.907342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.895 [2024-11-28 05:22:37.907387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.895 [2024-11-28 05:22:37.907398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:08.895 [2024-11-28 05:22:37.907407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:33:08.895 [2024-11-28 05:22:37.907414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.895 [2024-11-28 05:22:37.907426] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:08.895 [2024-11-28 05:22:37.907438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129024 / 261120 wr_cnt: 1 state: open 00:33:08.895 [2024-11-28 05:22:37.907447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907470] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 
05:22:37.907654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:33:08.895 [2024-11-28 05:22:37.907842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:08.895 [2024-11-28 05:22:37.907892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.907900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.907907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.907914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.907921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.907928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.907935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.907942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.907949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.907957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.907964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.907971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.907978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.907990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.907997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:08.896 [2024-11-28 05:22:37.908204] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:08.896 [2024-11-28 05:22:37.908215] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bac083d5-348e-4fb4-bb51-a59d48b2a450 00:33:08.896 [2024-11-28 05:22:37.908222] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129024 00:33:08.896 [2024-11-28 05:22:37.908229] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129056 00:33:08.896 [2024-11-28 
05:22:37.908239] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129024 00:33:08.896 [2024-11-28 05:22:37.908247] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:33:08.896 [2024-11-28 05:22:37.908257] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:08.896 [2024-11-28 05:22:37.908265] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:08.896 [2024-11-28 05:22:37.908272] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:08.896 [2024-11-28 05:22:37.908278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:08.896 [2024-11-28 05:22:37.908285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:08.896 [2024-11-28 05:22:37.908291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.896 [2024-11-28 05:22:37.908299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:08.896 [2024-11-28 05:22:37.908307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:33:08.896 [2024-11-28 05:22:37.908315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.896 [2024-11-28 05:22:37.909864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.896 [2024-11-28 05:22:37.909890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:08.896 [2024-11-28 05:22:37.909904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.536 ms 00:33:08.896 [2024-11-28 05:22:37.909912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.896 [2024-11-28 05:22:37.909994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.896 [2024-11-28 05:22:37.910002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:08.896 [2024-11-28 05:22:37.910009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:33:08.896 [2024-11-28 05:22:37.910016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.896 [2024-11-28 05:22:37.915083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.896 [2024-11-28 05:22:37.915115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:08.896 [2024-11-28 05:22:37.915125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.896 [2024-11-28 05:22:37.915132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.896 [2024-11-28 05:22:37.915196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.896 [2024-11-28 05:22:37.915204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:08.896 [2024-11-28 05:22:37.915212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.896 [2024-11-28 05:22:37.915219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.896 [2024-11-28 05:22:37.915262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.896 [2024-11-28 05:22:37.915271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:08.896 [2024-11-28 05:22:37.915281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.896 [2024-11-28 05:22:37.915288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.896 [2024-11-28 05:22:37.915304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:33:08.896 [2024-11-28 05:22:37.915312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:08.896 [2024-11-28 05:22:37.915320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.896 [2024-11-28 05:22:37.915327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.896 [2024-11-28 05:22:37.924500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.896 [2024-11-28 05:22:37.924539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:08.896 [2024-11-28 05:22:37.924548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.896 [2024-11-28 05:22:37.924556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.896 [2024-11-28 05:22:37.932562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.896 [2024-11-28 05:22:37.932597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:08.896 [2024-11-28 05:22:37.932608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.896 [2024-11-28 05:22:37.932616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.896 [2024-11-28 05:22:37.932640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.896 [2024-11-28 05:22:37.932648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:08.896 [2024-11-28 05:22:37.932655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.896 [2024-11-28 05:22:37.932668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.896 [2024-11-28 05:22:37.932711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.896 [2024-11-28 05:22:37.932720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:08.896 [2024-11-28 05:22:37.932728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.896 [2024-11-28 05:22:37.932735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.896 [2024-11-28 05:22:37.932788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.896 [2024-11-28 05:22:37.932798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:08.896 [2024-11-28 05:22:37.932806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.896 [2024-11-28 05:22:37.932817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.896 [2024-11-28 05:22:37.932839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.896 [2024-11-28 05:22:37.932847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:08.896 [2024-11-28 05:22:37.932855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.896 [2024-11-28 05:22:37.932866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.896 [2024-11-28 05:22:37.932903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.897 [2024-11-28 05:22:37.932912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:08.897 [2024-11-28 05:22:37.932920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.897 [2024-11-28 05:22:37.932927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.897 
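The data copy between the two management sequences moved 1024 MiB at an average of 17 MBps, roughly a minute of wall time, which matches the gap between the startup finishing at 05:21:38 and the shutdown beginning at 05:22:37. The statistics dumped just before these rollback lines also check out arithmetically; a quick verification using only figures printed in the log:

```python
# Worked check of the "Dump statistics" section above: write amplification
# (WAF) is total device writes over user writes, and the 32-block difference
# is the metadata the FTL wrote alongside the 129024 user blocks. All numbers
# are copied from this log.
user_writes, total_writes = 129024, 129056
waf = total_writes / user_writes
assert total_writes - user_writes == 32     # metadata overhead, in blocks
assert f"{waf:.4f}" == "1.0002"             # matches "WAF: 1.0002"

# The single open band holds all the user data:
# "Band 1: 129024 / 261120 wr_cnt: 1 state: open", i.e. ~49.4% full.
assert round(129024 / 261120 * 100, 1) == 49.4
```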
[2024-11-28 05:22:37.932969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.897 [2024-11-28 05:22:37.932978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:08.897 [2024-11-28 05:22:37.932986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.897 [2024-11-28 05:22:37.932993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.897 [2024-11-28 05:22:37.933109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 43.282 ms, result 0 00:33:09.859 00:33:09.859 00:33:09.859 05:22:39 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:33:09.859 [2024-11-28 05:22:39.084203] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:33:09.859 [2024-11-28 05:22:39.084346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96809 ] 00:33:10.122 [2024-11-28 05:22:39.223747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.122 [2024-11-28 05:22:39.251829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:10.122 [2024-11-28 05:22:39.367636] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:10.122 [2024-11-28 05:22:39.367718] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:10.386 [2024-11-28 05:22:39.528911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.386 [2024-11-28 05:22:39.528967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:10.386 [2024-11-28 05:22:39.528986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:10.386 [2024-11-28 05:22:39.528995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.386 [2024-11-28 05:22:39.529047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.386 [2024-11-28 05:22:39.529062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:10.386 [2024-11-28 05:22:39.529071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:33:10.386 [2024-11-28 05:22:39.529085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.386 [2024-11-28 05:22:39.529112] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:10.386 [2024-11-28 05:22:39.529393] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:10.386 [2024-11-28 05:22:39.529411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.386 [2024-11-28 05:22:39.529438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:10.386 [2024-11-28 05:22:39.529450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:33:10.386 [2024-11-28 05:22:39.529461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.386 [2024-11-28 05:22:39.529734] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] 
SHM: clean 1, shm_clean 1 00:33:10.386 [2024-11-28 05:22:39.529759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.386 [2024-11-28 05:22:39.529768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:10.386 [2024-11-28 05:22:39.529777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:10.386 [2024-11-28 05:22:39.529793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.386 [2024-11-28 05:22:39.529854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.386 [2024-11-28 05:22:39.529864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:10.386 [2024-11-28 05:22:39.529872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:33:10.386 [2024-11-28 05:22:39.529880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.386 [2024-11-28 05:22:39.530160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.386 [2024-11-28 05:22:39.530172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:10.386 [2024-11-28 05:22:39.530203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:33:10.386 [2024-11-28 05:22:39.530211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.386 [2024-11-28 05:22:39.530297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.386 [2024-11-28 05:22:39.530311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:10.386 [2024-11-28 05:22:39.530320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:33:10.386 [2024-11-28 05:22:39.530327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.386 [2024-11-28 05:22:39.530350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.386 [2024-11-28 05:22:39.530360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:10.386 [2024-11-28 05:22:39.530368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:10.386 [2024-11-28 05:22:39.530380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.386 [2024-11-28 05:22:39.530400] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:10.386 [2024-11-28 05:22:39.532460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.386 [2024-11-28 05:22:39.532498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:10.386 [2024-11-28 05:22:39.532509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.063 ms 00:33:10.386 [2024-11-28 05:22:39.532524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.386 [2024-11-28 05:22:39.532559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.386 [2024-11-28 05:22:39.532569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:10.386 [2024-11-28 05:22:39.532578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:10.386 [2024-11-28 05:22:39.532586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.386 [2024-11-28 05:22:39.532620] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:10.386 [2024-11-28 05:22:39.532649] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: 
*NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:10.386 [2024-11-28 05:22:39.532691] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:10.386 [2024-11-28 05:22:39.532708] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:10.386 [2024-11-28 05:22:39.532813] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:10.386 [2024-11-28 05:22:39.532831] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:10.386 [2024-11-28 05:22:39.532844] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:10.386 [2024-11-28 05:22:39.532855] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:10.386 [2024-11-28 05:22:39.532868] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:10.386 [2024-11-28 05:22:39.532877] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:10.386 [2024-11-28 05:22:39.532886] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:10.386 [2024-11-28 05:22:39.532897] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:10.386 [2024-11-28 05:22:39.532906] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:10.386 [2024-11-28 05:22:39.532914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.386 [2024-11-28 05:22:39.532921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:10.386 [2024-11-28 05:22:39.532929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:33:10.386 [2024-11-28 05:22:39.532940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.386 [2024-11-28 05:22:39.533023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.386 [2024-11-28 05:22:39.533037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:10.386 [2024-11-28 05:22:39.533047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:10.386 [2024-11-28 05:22:39.533054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.386 [2024-11-28 05:22:39.533205] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:10.386 [2024-11-28 05:22:39.533223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:10.386 [2024-11-28 05:22:39.533236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:10.386 [2024-11-28 05:22:39.533245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:10.386 [2024-11-28 05:22:39.533252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:10.386 [2024-11-28 05:22:39.533261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:10.386 [2024-11-28 05:22:39.533272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:10.386 [2024-11-28 05:22:39.533279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:10.386 [2024-11-28 05:22:39.533286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:10.386 [2024-11-28 05:22:39.533293] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:10.386 [2024-11-28 05:22:39.533299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:10.387 [2024-11-28 05:22:39.533306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:10.387 [2024-11-28 05:22:39.533313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:10.387 [2024-11-28 05:22:39.533319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:10.387 [2024-11-28 05:22:39.533326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:10.387 [2024-11-28 05:22:39.533333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:10.387 [2024-11-28 05:22:39.533340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:10.387 [2024-11-28 05:22:39.533347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:10.387 [2024-11-28 05:22:39.533353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:10.387 [2024-11-28 05:22:39.533360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:10.387 [2024-11-28 05:22:39.533367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:10.387 [2024-11-28 05:22:39.533378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:10.387 [2024-11-28 05:22:39.533385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:10.387 [2024-11-28 05:22:39.533392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:10.387 [2024-11-28 05:22:39.533398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:10.387 [2024-11-28 05:22:39.533404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:10.387 [2024-11-28 05:22:39.533410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:10.387 [2024-11-28 05:22:39.533417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:10.387 [2024-11-28 05:22:39.533437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:10.387 [2024-11-28 05:22:39.533444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:10.387 [2024-11-28 05:22:39.533450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:10.387 [2024-11-28 05:22:39.533457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:10.387 [2024-11-28 05:22:39.533464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:10.387 [2024-11-28 05:22:39.533471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:10.387 [2024-11-28 05:22:39.533477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:10.387 [2024-11-28 05:22:39.533484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:10.387 [2024-11-28 05:22:39.533490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:10.387 [2024-11-28 05:22:39.533499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:10.387 [2024-11-28 05:22:39.533507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:10.387 [2024-11-28 05:22:39.533514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:10.387 [2024-11-28 05:22:39.533522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:10.387 
[2024-11-28 05:22:39.533529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:10.387 [2024-11-28 05:22:39.533535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:10.387 [2024-11-28 05:22:39.533542] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:10.387 [2024-11-28 05:22:39.533550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:10.387 [2024-11-28 05:22:39.533557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:10.387 [2024-11-28 05:22:39.533567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:10.387 [2024-11-28 05:22:39.533575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:10.387 [2024-11-28 05:22:39.533582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:10.387 [2024-11-28 05:22:39.533589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:10.387 [2024-11-28 05:22:39.533596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:10.387 [2024-11-28 05:22:39.533602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:10.387 [2024-11-28 05:22:39.533609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:10.387 [2024-11-28 05:22:39.533620] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:10.387 [2024-11-28 05:22:39.533632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:10.387 [2024-11-28 05:22:39.533641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:10.387 [2024-11-28 05:22:39.533648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:10.387 [2024-11-28 05:22:39.533655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:10.387 [2024-11-28 05:22:39.533662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:10.387 [2024-11-28 05:22:39.533669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:10.387 [2024-11-28 05:22:39.533677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:10.387 [2024-11-28 05:22:39.533684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:10.387 [2024-11-28 05:22:39.533690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:10.387 [2024-11-28 05:22:39.533697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:10.387 [2024-11-28 05:22:39.533705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:10.387 [2024-11-28 05:22:39.533713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 
blk_offs:0x71c0 blk_sz:0x20 00:33:10.387 [2024-11-28 05:22:39.533725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:10.387 [2024-11-28 05:22:39.533732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:10.387 [2024-11-28 05:22:39.533739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:10.387 [2024-11-28 05:22:39.533749] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:10.387 [2024-11-28 05:22:39.533759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:10.387 [2024-11-28 05:22:39.533768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:10.387 [2024-11-28 05:22:39.533775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:10.387 [2024-11-28 05:22:39.533783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:10.387 [2024-11-28 05:22:39.533790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:10.387 [2024-11-28 05:22:39.533798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.387 [2024-11-28 05:22:39.533806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:10.387 [2024-11-28 05:22:39.533813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:33:10.387 [2024-11-28 05:22:39.533824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.387 [2024-11-28 05:22:39.543402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.387 [2024-11-28 05:22:39.543438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:10.387 [2024-11-28 05:22:39.543448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.535 ms 00:33:10.387 [2024-11-28 05:22:39.543456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.387 [2024-11-28 05:22:39.543541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.387 [2024-11-28 05:22:39.543550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:10.387 [2024-11-28 05:22:39.543559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:33:10.387 [2024-11-28 05:22:39.543567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.387 [2024-11-28 05:22:39.563595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.387 [2024-11-28 05:22:39.563651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:10.387 [2024-11-28 05:22:39.563674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.974 ms 00:33:10.387 [2024-11-28 05:22:39.563693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.387 [2024-11-28 05:22:39.563755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.387 [2024-11-28 05:22:39.563770] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:10.387 [2024-11-28 05:22:39.563783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:10.387 [2024-11-28 05:22:39.563793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.387 [2024-11-28 05:22:39.563929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.387 [2024-11-28 05:22:39.563960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:10.387 [2024-11-28 05:22:39.563972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:33:10.387 [2024-11-28 05:22:39.563988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.387 [2024-11-28 05:22:39.564155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.387 [2024-11-28 05:22:39.564168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:10.387 [2024-11-28 05:22:39.564205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:33:10.387 [2024-11-28 05:22:39.564219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.387 [2024-11-28 05:22:39.572784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.387 [2024-11-28 05:22:39.572829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:10.387 [2024-11-28 05:22:39.572846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.532 ms 00:33:10.387 [2024-11-28 05:22:39.572856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.387 [2024-11-28 05:22:39.573003] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:10.387 [2024-11-28 05:22:39.573021] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:10.387 [2024-11-28 05:22:39.573033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.387 [2024-11-28 05:22:39.573049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:10.388 [2024-11-28 05:22:39.573060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:33:10.388 [2024-11-28 05:22:39.573079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.388 [2024-11-28 05:22:39.585555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.388 [2024-11-28 05:22:39.585589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:10.388 [2024-11-28 05:22:39.585599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.455 ms 00:33:10.388 [2024-11-28 05:22:39.585607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.388 [2024-11-28 05:22:39.585734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.388 [2024-11-28 05:22:39.585749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:10.388 [2024-11-28 05:22:39.585758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:33:10.388 [2024-11-28 05:22:39.585768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.388 [2024-11-28 05:22:39.585822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.388 [2024-11-28 05:22:39.585835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim 
metadata 00:33:10.388 [2024-11-28 05:22:39.585843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:10.388 [2024-11-28 05:22:39.585851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.388 [2024-11-28 05:22:39.586156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.388 [2024-11-28 05:22:39.586175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:10.388 [2024-11-28 05:22:39.586208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:33:10.388 [2024-11-28 05:22:39.586216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.388 [2024-11-28 05:22:39.586234] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:10.388 [2024-11-28 05:22:39.586247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.388 [2024-11-28 05:22:39.586264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:10.388 [2024-11-28 05:22:39.586272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:10.388 [2024-11-28 05:22:39.586289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.388 [2024-11-28 05:22:39.595426] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:10.388 [2024-11-28 05:22:39.595589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.388 [2024-11-28 05:22:39.595601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:10.388 [2024-11-28 05:22:39.595611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.282 ms 00:33:10.388 [2024-11-28 05:22:39.595618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.388 [2024-11-28 05:22:39.598212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.388 [2024-11-28 05:22:39.598241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:10.388 [2024-11-28 05:22:39.598253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.569 ms 00:33:10.388 [2024-11-28 05:22:39.598261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.388 [2024-11-28 05:22:39.598340] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:33:10.388 [2024-11-28 05:22:39.598922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.388 [2024-11-28 05:22:39.598940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:10.388 [2024-11-28 05:22:39.598950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:33:10.388 [2024-11-28 05:22:39.598962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.388 [2024-11-28 05:22:39.598986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.388 [2024-11-28 05:22:39.598994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:10.388 [2024-11-28 05:22:39.599002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:10.388 [2024-11-28 05:22:39.599010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.388 [2024-11-28 05:22:39.599044] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:10.388 [2024-11-28 
05:22:39.599054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.388 [2024-11-28 05:22:39.599064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:10.388 [2024-11-28 05:22:39.599072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:10.388 [2024-11-28 05:22:39.599081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.388 [2024-11-28 05:22:39.605056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.388 [2024-11-28 05:22:39.605102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:10.388 [2024-11-28 05:22:39.605113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.956 ms 00:33:10.388 [2024-11-28 05:22:39.605122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.388 [2024-11-28 05:22:39.605223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.388 [2024-11-28 05:22:39.605239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:10.388 [2024-11-28 05:22:39.605250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:33:10.388 [2024-11-28 05:22:39.605257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.388 [2024-11-28 05:22:39.606515] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 77.150 ms, result 0 00:33:11.776  [2024-11-28T05:22:42.004Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-28T05:22:42.948Z] Copying: 42/1024 [MB] (20 MBps) [2024-11-28T05:22:43.892Z] Copying: 59/1024 [MB] (17 MBps) [2024-11-28T05:22:44.835Z] Copying: 78/1024 [MB] (19 MBps) [2024-11-28T05:22:46.221Z] Copying: 99/1024 [MB] (20 MBps) [2024-11-28T05:22:47.167Z] Copying: 120/1024 [MB] (20 MBps) [2024-11-28T05:22:48.109Z] Copying: 136/1024 [MB] (16 MBps) [2024-11-28T05:22:49.054Z] Copying: 147/1024 [MB] (10 MBps) [2024-11-28T05:22:49.994Z] Copying: 157/1024 [MB] (10 MBps) [2024-11-28T05:22:50.937Z] Copying: 176/1024 [MB] (18 MBps) [2024-11-28T05:22:51.880Z] Copying: 188/1024 [MB] (12 MBps) [2024-11-28T05:22:52.826Z] Copying: 199/1024 [MB] (11 MBps) [2024-11-28T05:22:54.215Z] Copying: 212/1024 [MB] (12 MBps) [2024-11-28T05:22:55.160Z] Copying: 225/1024 [MB] (12 MBps) [2024-11-28T05:22:56.104Z] Copying: 236/1024 [MB] (11 MBps) [2024-11-28T05:22:57.049Z] Copying: 247/1024 [MB] (11 MBps) [2024-11-28T05:22:57.995Z] Copying: 265/1024 [MB] (17 MBps) [2024-11-28T05:22:58.940Z] Copying: 276/1024 [MB] (11 MBps) [2024-11-28T05:22:59.884Z] Copying: 295/1024 [MB] (18 MBps) [2024-11-28T05:23:00.831Z] Copying: 307/1024 [MB] (12 MBps) [2024-11-28T05:23:02.220Z] Copying: 318/1024 [MB] (10 MBps) [2024-11-28T05:23:03.163Z] Copying: 329/1024 [MB] (10 MBps) [2024-11-28T05:23:04.106Z] Copying: 339/1024 [MB] (10 MBps) [2024-11-28T05:23:05.049Z] Copying: 355/1024 [MB] (15 MBps) [2024-11-28T05:23:05.990Z] Copying: 371/1024 [MB] (16 MBps) [2024-11-28T05:23:07.012Z] Copying: 392/1024 [MB] (21 MBps) [2024-11-28T05:23:07.973Z] Copying: 409/1024 [MB] (16 MBps) [2024-11-28T05:23:08.916Z] Copying: 425/1024 [MB] (15 MBps) [2024-11-28T05:23:09.860Z] Copying: 446/1024 [MB] (21 MBps) [2024-11-28T05:23:10.804Z] Copying: 466/1024 [MB] (20 MBps) [2024-11-28T05:23:12.190Z] Copying: 487/1024 [MB] (21 MBps) [2024-11-28T05:23:13.136Z] Copying: 507/1024 [MB] (19 MBps) [2024-11-28T05:23:14.082Z] Copying: 518/1024 [MB] (10 MBps) [2024-11-28T05:23:15.028Z] Copying: 535/1024 [MB] (17 
MBps) [2024-11-28T05:23:15.986Z] Copying: 554/1024 [MB] (18 MBps) [2024-11-28T05:23:16.930Z] Copying: 570/1024 [MB] (16 MBps) [2024-11-28T05:23:17.875Z] Copying: 583/1024 [MB] (13 MBps) [2024-11-28T05:23:18.819Z] Copying: 600/1024 [MB] (16 MBps) [2024-11-28T05:23:20.206Z] Copying: 612/1024 [MB] (12 MBps) [2024-11-28T05:23:21.149Z] Copying: 625/1024 [MB] (13 MBps) [2024-11-28T05:23:22.092Z] Copying: 638/1024 [MB] (12 MBps) [2024-11-28T05:23:23.034Z] Copying: 650/1024 [MB] (11 MBps) [2024-11-28T05:23:23.977Z] Copying: 668/1024 [MB] (17 MBps) [2024-11-28T05:23:24.922Z] Copying: 686/1024 [MB] (18 MBps) [2024-11-28T05:23:25.864Z] Copying: 696/1024 [MB] (10 MBps) [2024-11-28T05:23:26.806Z] Copying: 713/1024 [MB] (16 MBps) [2024-11-28T05:23:28.193Z] Copying: 727/1024 [MB] (13 MBps) [2024-11-28T05:23:29.136Z] Copying: 750/1024 [MB] (23 MBps) [2024-11-28T05:23:30.080Z] Copying: 770/1024 [MB] (19 MBps) [2024-11-28T05:23:31.025Z] Copying: 783/1024 [MB] (13 MBps) [2024-11-28T05:23:32.103Z] Copying: 799/1024 [MB] (16 MBps) [2024-11-28T05:23:33.062Z] Copying: 809/1024 [MB] (10 MBps) [2024-11-28T05:23:34.008Z] Copying: 820/1024 [MB] (10 MBps) [2024-11-28T05:23:34.952Z] Copying: 830/1024 [MB] (10 MBps) [2024-11-28T05:23:35.901Z] Copying: 842/1024 [MB] (11 MBps) [2024-11-28T05:23:36.845Z] Copying: 855/1024 [MB] (12 MBps) [2024-11-28T05:23:38.231Z] Copying: 873/1024 [MB] (17 MBps) [2024-11-28T05:23:39.170Z] Copying: 889/1024 [MB] (16 MBps) [2024-11-28T05:23:40.114Z] Copying: 905/1024 [MB] (15 MBps) [2024-11-28T05:23:41.058Z] Copying: 929/1024 [MB] (23 MBps) [2024-11-28T05:23:42.000Z] Copying: 951/1024 [MB] (22 MBps) [2024-11-28T05:23:42.942Z] Copying: 971/1024 [MB] (20 MBps) [2024-11-28T05:23:43.883Z] Copying: 988/1024 [MB] (16 MBps) [2024-11-28T05:23:44.827Z] Copying: 1003/1024 [MB] (14 MBps) [2024-11-28T05:23:45.400Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-28 05:23:45.146647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:16.116 [2024-11-28 05:23:45.146779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:16.116 [2024-11-28 05:23:45.146814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:16.116 [2024-11-28 05:23:45.146847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.116 [2024-11-28 05:23:45.146904] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:16.116 [2024-11-28 05:23:45.147927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:16.116 [2024-11-28 05:23:45.148011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:16.116 [2024-11-28 05:23:45.148039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.986 ms 00:34:16.116 [2024-11-28 05:23:45.148069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.116 [2024-11-28 05:23:45.148700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:16.116 [2024-11-28 05:23:45.148743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:16.116 [2024-11-28 05:23:45.148766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:34:16.116 [2024-11-28 05:23:45.148786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.116 [2024-11-28 05:23:45.148874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:16.116 [2024-11-28 05:23:45.148899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Fast persist NV cache metadata 00:34:16.116 [2024-11-28 05:23:45.148921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:16.116 [2024-11-28 05:23:45.148941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.116 [2024-11-28 05:23:45.149071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:16.116 [2024-11-28 05:23:45.149101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:16.116 [2024-11-28 05:23:45.149123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:34:16.116 [2024-11-28 05:23:45.149142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.116 [2024-11-28 05:23:45.149203] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:16.116 [2024-11-28 05:23:45.149236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131584 / 261120 wr_cnt: 1 state: open 00:34:16.116 [2024-11-28 05:23:45.149271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: 
free 00:34:16.116 [2024-11-28 05:23:45.149683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.149999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.150020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.150040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.150061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.150084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.150104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.150125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.150146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.150167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.150210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 
261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.150231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.150252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.150273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.150293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.150314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:16.116 [2024-11-28 05:23:45.150334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.150998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151311] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:16.117 [2024-11-28 05:23:45.151711] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:16.117 [2024-11-28 05:23:45.151733] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bac083d5-348e-4fb4-bb51-a59d48b2a450 00:34:16.117 [2024-11-28 05:23:45.151755] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131584 00:34:16.117 [2024-11-28 05:23:45.151775] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2592 00:34:16.117 [2024-11-28 05:23:45.151796] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2560 00:34:16.117 [2024-11-28 05:23:45.151823] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:34:16.117 [2024-11-28 05:23:45.151842] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:16.117 [2024-11-28 05:23:45.151864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:16.117 [2024-11-28 05:23:45.151894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:16.117 [2024-11-28 05:23:45.151912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:16.117 [2024-11-28 05:23:45.151931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:16.117 [2024-11-28 05:23:45.151951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:16.117 [2024-11-28 05:23:45.151972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:16.117 [2024-11-28 05:23:45.151992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:34:16.117 [2024-11-28 05:23:45.152011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.117 [2024-11-28 05:23:45.154906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:16.117 [2024-11-28 05:23:45.154955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:16.117 [2024-11-28 05:23:45.154965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.857 ms 00:34:16.117 [2024-11-28 05:23:45.154974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.117 [2024-11-28 05:23:45.155103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:16.117 [2024-11-28 05:23:45.155115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:16.117 [2024-11-28 05:23:45.155124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:34:16.117 [2024-11-28 05:23:45.155131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:34:16.117 [2024-11-28 05:23:45.163831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.117 [2024-11-28 05:23:45.163883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:16.117 [2024-11-28 05:23:45.163895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.117 [2024-11-28 05:23:45.163903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.117 [2024-11-28 05:23:45.163981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.117 [2024-11-28 05:23:45.163991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:16.117 [2024-11-28 05:23:45.163999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.117 [2024-11-28 05:23:45.164008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.117 [2024-11-28 05:23:45.164077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.117 [2024-11-28 05:23:45.164094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:16.117 [2024-11-28 05:23:45.164102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.117 [2024-11-28 05:23:45.164111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.117 [2024-11-28 05:23:45.164129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.117 [2024-11-28 05:23:45.164137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:16.117 [2024-11-28 05:23:45.164146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.117 [2024-11-28 05:23:45.164153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.117 [2024-11-28 05:23:45.180016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.117 [2024-11-28 05:23:45.180074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:16.117 [2024-11-28 05:23:45.180085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.117 [2024-11-28 05:23:45.180093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.117 [2024-11-28 05:23:45.191997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.117 [2024-11-28 05:23:45.192052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:16.117 [2024-11-28 05:23:45.192064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.117 [2024-11-28 05:23:45.192073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.118 [2024-11-28 05:23:45.192139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.118 [2024-11-28 05:23:45.192150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:16.118 [2024-11-28 05:23:45.192163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.118 [2024-11-28 05:23:45.192171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.118 [2024-11-28 05:23:45.192237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.118 [2024-11-28 05:23:45.192249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:16.118 [2024-11-28 05:23:45.192258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.118 
[2024-11-28 05:23:45.192266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.118 [2024-11-28 05:23:45.192324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.118 [2024-11-28 05:23:45.192337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:16.118 [2024-11-28 05:23:45.192346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.118 [2024-11-28 05:23:45.192359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.118 [2024-11-28 05:23:45.192388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.118 [2024-11-28 05:23:45.192399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:16.118 [2024-11-28 05:23:45.192409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.118 [2024-11-28 05:23:45.192417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.118 [2024-11-28 05:23:45.192461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.118 [2024-11-28 05:23:45.192478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:16.118 [2024-11-28 05:23:45.192487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.118 [2024-11-28 05:23:45.192499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.118 [2024-11-28 05:23:45.192546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:16.118 [2024-11-28 05:23:45.192558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:16.118 [2024-11-28 05:23:45.192568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:16.118 [2024-11-28 05:23:45.192576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:16.118 [2024-11-28 05:23:45.192723] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 46.065 ms, result 0 00:34:16.118 00:34:16.118 00:34:16.379 05:23:45 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:18.929 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:18.929 05:23:47 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:18.929 05:23:47 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:34:18.929 05:23:47 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:18.929 05:23:47 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:18.929 05:23:47 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:18.929 05:23:47 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 94680 00:34:18.929 05:23:47 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94680 ']' 00:34:18.929 05:23:47 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94680 00:34:18.929 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94680) - No such process 00:34:18.929 05:23:47 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 94680 is not found' 00:34:18.929 Process with pid 94680 is not found 00:34:18.929 Remove shared memory files 00:34:18.929 05:23:47 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 
00:34:18.929 05:23:47 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:18.929 05:23:47 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:34:18.930 05:23:47 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_bac083d5-348e-4fb4-bb51-a59d48b2a450_band_md /dev/hugepages/ftl_bac083d5-348e-4fb4-bb51-a59d48b2a450_l2p_l1 /dev/hugepages/ftl_bac083d5-348e-4fb4-bb51-a59d48b2a450_l2p_l2 /dev/hugepages/ftl_bac083d5-348e-4fb4-bb51-a59d48b2a450_l2p_l2_ctx /dev/hugepages/ftl_bac083d5-348e-4fb4-bb51-a59d48b2a450_nvc_md /dev/hugepages/ftl_bac083d5-348e-4fb4-bb51-a59d48b2a450_p2l_pool /dev/hugepages/ftl_bac083d5-348e-4fb4-bb51-a59d48b2a450_sb /dev/hugepages/ftl_bac083d5-348e-4fb4-bb51-a59d48b2a450_sb_shm /dev/hugepages/ftl_bac083d5-348e-4fb4-bb51-a59d48b2a450_trim_bitmap /dev/hugepages/ftl_bac083d5-348e-4fb4-bb51-a59d48b2a450_trim_log /dev/hugepages/ftl_bac083d5-348e-4fb4-bb51-a59d48b2a450_trim_md /dev/hugepages/ftl_bac083d5-348e-4fb4-bb51-a59d48b2a450_vmap 00:34:18.930 05:23:47 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:34:18.930 05:23:47 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:18.930 05:23:47 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:34:18.930 00:34:18.930 real 4m37.392s 00:34:18.930 user 4m25.496s 00:34:18.930 sys 0m11.500s 00:34:18.930 ************************************ 00:34:18.930 END TEST ftl_restore_fast 00:34:18.930 05:23:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:18.930 05:23:47 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:34:18.930 ************************************ 00:34:18.930 05:23:47 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:18.930 05:23:47 ftl -- ftl/ftl.sh@14 -- # killprocess 85934 00:34:18.930 05:23:47 ftl -- common/autotest_common.sh@954 -- # '[' -z 85934 ']' 00:34:18.930 Process with pid 85934 is not found 00:34:18.930 05:23:47 ftl -- common/autotest_common.sh@958 -- # kill -0 85934 00:34:18.930 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85934) - No such process 00:34:18.930 05:23:47 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 85934 is not found' 00:34:18.930 05:23:47 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:18.930 05:23:47 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=97517 00:34:18.930 05:23:47 ftl -- ftl/ftl.sh@20 -- # waitforlisten 97517 00:34:18.930 05:23:47 ftl -- common/autotest_common.sh@835 -- # '[' -z 97517 ']' 00:34:18.930 05:23:47 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.930 05:23:47 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:18.930 05:23:47 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:18.930 05:23:47 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:18.930 05:23:47 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:18.930 05:23:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:18.930 [2024-11-28 05:23:47.942747] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:34:18.930 [2024-11-28 05:23:47.942900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97517 ] 00:34:18.930 [2024-11-28 05:23:48.091531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:18.930 [2024-11-28 05:23:48.120359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.875 05:23:48 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:19.875 05:23:48 ftl -- common/autotest_common.sh@868 -- # return 0 00:34:19.875 05:23:48 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:19.875 nvme0n1 00:34:19.875 05:23:49 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:19.875 05:23:49 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:19.875 05:23:49 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:20.136 05:23:49 ftl -- ftl/common.sh@28 -- # stores=ef1056a9-a86d-41cc-b430-224e40720078 00:34:20.136 05:23:49 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:20.136 05:23:49 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef1056a9-a86d-41cc-b430-224e40720078 00:34:20.398 05:23:49 ftl -- ftl/ftl.sh@23 -- # killprocess 97517 00:34:20.398 05:23:49 ftl -- common/autotest_common.sh@954 -- # '[' -z 97517 ']' 00:34:20.398 05:23:49 ftl -- common/autotest_common.sh@958 -- # kill -0 97517 00:34:20.398 05:23:49 ftl -- common/autotest_common.sh@959 -- # uname 00:34:20.398 05:23:49 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:20.398 05:23:49 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97517 00:34:20.398 05:23:49 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:20.398 killing process with pid 97517 00:34:20.398 05:23:49 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:20.398 05:23:49 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97517' 00:34:20.398 05:23:49 ftl -- common/autotest_common.sh@973 -- # kill 97517 00:34:20.398 05:23:49 ftl -- common/autotest_common.sh@978 -- # wait 97517 00:34:20.969 05:23:49 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:20.969 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:20.969 Waiting for block devices as requested 00:34:20.969 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:21.230 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:21.230 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:21.491 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:26.785 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:26.785 05:23:55 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:26.785 Remove shared memory files 00:34:26.785 05:23:55 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:26.785 05:23:55 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:26.785 05:23:55 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:26.785 05:23:55 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:26.785 05:23:55 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:26.785 05:23:55 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:26.785 00:34:26.785 real 
17m28.079s 00:34:26.785 user 19m16.784s 00:34:26.785 sys 1m18.220s 00:34:26.785 05:23:55 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:26.785 ************************************ 00:34:26.785 05:23:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:26.785 END TEST ftl 00:34:26.785 ************************************ 00:34:26.785 05:23:55 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:26.785 05:23:55 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:26.785 05:23:55 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:26.785 05:23:55 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:26.785 05:23:55 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:26.785 05:23:55 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:26.785 05:23:55 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:26.785 05:23:55 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:26.785 05:23:55 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:26.785 05:23:55 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:26.785 05:23:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:26.785 05:23:55 -- common/autotest_common.sh@10 -- # set +x 00:34:26.785 05:23:55 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:26.785 05:23:55 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:26.785 05:23:55 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:26.785 05:23:55 -- common/autotest_common.sh@10 -- # set +x 00:34:28.174 INFO: APP EXITING 00:34:28.174 INFO: killing all VMs 00:34:28.174 INFO: killing vhost app 00:34:28.174 INFO: EXIT DONE 00:34:28.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:28.698 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:28.698 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:28.698 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:28.698 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:29.271 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:29.532 Cleaning 00:34:29.532 Removing: /var/run/dpdk/spdk0/config 00:34:29.532 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:29.532 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:29.532 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:29.532 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:29.532 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:29.532 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:29.532 Removing: /var/run/dpdk/spdk0 00:34:29.532 Removing: /var/run/dpdk/spdk_pid68912 00:34:29.532 Removing: /var/run/dpdk/spdk_pid69076 00:34:29.532 Removing: /var/run/dpdk/spdk_pid69277 00:34:29.532 Removing: /var/run/dpdk/spdk_pid69359 00:34:29.532 Removing: /var/run/dpdk/spdk_pid69382 00:34:29.532 Removing: /var/run/dpdk/spdk_pid69495 00:34:29.532 Removing: /var/run/dpdk/spdk_pid69512 00:34:29.532 Removing: /var/run/dpdk/spdk_pid69694 00:34:29.532 Removing: /var/run/dpdk/spdk_pid69768 00:34:29.532 Removing: /var/run/dpdk/spdk_pid69847 00:34:29.532 Removing: /var/run/dpdk/spdk_pid69947 00:34:29.532 Removing: /var/run/dpdk/spdk_pid70028 00:34:29.532 Removing: /var/run/dpdk/spdk_pid70067 00:34:29.532 Removing: /var/run/dpdk/spdk_pid70098 00:34:29.532 Removing: /var/run/dpdk/spdk_pid70169 00:34:29.532 Removing: /var/run/dpdk/spdk_pid70275 00:34:29.532 Removing: /var/run/dpdk/spdk_pid70694 00:34:29.532 Removing: /var/run/dpdk/spdk_pid70742 00:34:29.532 
00:34:29.532 Removing: /var/run/dpdk/spdk_pid70783
00:34:29.532 Removing: /var/run/dpdk/spdk_pid70799
00:34:29.532 Removing: /var/run/dpdk/spdk_pid70857
00:34:29.532 Removing: /var/run/dpdk/spdk_pid70873
00:34:29.532 Removing: /var/run/dpdk/spdk_pid70931
00:34:29.532 Removing: /var/run/dpdk/spdk_pid70941
00:34:29.532 Removing: /var/run/dpdk/spdk_pid70989
00:34:29.532 Removing: /var/run/dpdk/spdk_pid71007
00:34:29.532 Removing: /var/run/dpdk/spdk_pid71043
00:34:29.532 Removing: /var/run/dpdk/spdk_pid71056
00:34:29.532 Removing: /var/run/dpdk/spdk_pid71184
00:34:29.532 Removing: /var/run/dpdk/spdk_pid71225
00:34:29.532 Removing: /var/run/dpdk/spdk_pid71303
00:34:29.532 Removing: /var/run/dpdk/spdk_pid71464
00:34:29.532 Removing: /var/run/dpdk/spdk_pid71537
00:34:29.532 Removing: /var/run/dpdk/spdk_pid71557
00:34:29.532 Removing: /var/run/dpdk/spdk_pid71973
00:34:29.532 Removing: /var/run/dpdk/spdk_pid72060
00:34:29.532 Removing: /var/run/dpdk/spdk_pid72163
00:34:29.532 Removing: /var/run/dpdk/spdk_pid72200
00:34:29.532 Removing: /var/run/dpdk/spdk_pid72225
00:34:29.532 Removing: /var/run/dpdk/spdk_pid72300
00:34:29.532 Removing: /var/run/dpdk/spdk_pid72916
00:34:29.532 Removing: /var/run/dpdk/spdk_pid72947
00:34:29.532 Removing: /var/run/dpdk/spdk_pid73403
00:34:29.532 Removing: /var/run/dpdk/spdk_pid73501
00:34:29.532 Removing: /var/run/dpdk/spdk_pid73599
00:34:29.532 Removing: /var/run/dpdk/spdk_pid73636
00:34:29.532 Removing: /var/run/dpdk/spdk_pid73661
00:34:29.794 Removing: /var/run/dpdk/spdk_pid73681
00:34:29.794 Removing: /var/run/dpdk/spdk_pid75499
00:34:29.794 Removing: /var/run/dpdk/spdk_pid75620
00:34:29.794 Removing: /var/run/dpdk/spdk_pid75624
00:34:29.794 Removing: /var/run/dpdk/spdk_pid75641
00:34:29.794 Removing: /var/run/dpdk/spdk_pid75687
00:34:29.794 Removing: /var/run/dpdk/spdk_pid75691
00:34:29.794 Removing: /var/run/dpdk/spdk_pid75703
00:34:29.794 Removing: /var/run/dpdk/spdk_pid75749
00:34:29.794 Removing: /var/run/dpdk/spdk_pid75753
00:34:29.794 Removing: /var/run/dpdk/spdk_pid75765
00:34:29.794 Removing: /var/run/dpdk/spdk_pid75804
00:34:29.794 Removing: /var/run/dpdk/spdk_pid75808
00:34:29.794 Removing: /var/run/dpdk/spdk_pid75820
00:34:29.794 Removing: /var/run/dpdk/spdk_pid77202
00:34:29.794 Removing: /var/run/dpdk/spdk_pid77289
00:34:29.794 Removing: /var/run/dpdk/spdk_pid78687
00:34:29.794 Removing: /var/run/dpdk/spdk_pid80402
00:34:29.794 Removing: /var/run/dpdk/spdk_pid80460
00:34:29.794 Removing: /var/run/dpdk/spdk_pid80529
00:34:29.794 Removing: /var/run/dpdk/spdk_pid80631
00:34:29.794 Removing: /var/run/dpdk/spdk_pid80712
00:34:29.794 Removing: /var/run/dpdk/spdk_pid80801
00:34:29.795 Removing: /var/run/dpdk/spdk_pid80854
00:34:29.795 Removing: /var/run/dpdk/spdk_pid80918
00:34:29.795 Removing: /var/run/dpdk/spdk_pid81022
00:34:29.795 Removing: /var/run/dpdk/spdk_pid81103
00:34:29.795 Removing: /var/run/dpdk/spdk_pid81193
00:34:29.795 Removing: /var/run/dpdk/spdk_pid81245
00:34:29.795 Removing: /var/run/dpdk/spdk_pid81316
00:34:29.795 Removing: /var/run/dpdk/spdk_pid81409
00:34:29.795 Removing: /var/run/dpdk/spdk_pid81495
00:34:29.795 Removing: /var/run/dpdk/spdk_pid81584
00:34:29.795 Removing: /var/run/dpdk/spdk_pid81637
00:34:29.795 Removing: /var/run/dpdk/spdk_pid81708
00:34:29.795 Removing: /var/run/dpdk/spdk_pid81801
00:34:29.795 Removing: /var/run/dpdk/spdk_pid81887
00:34:29.795 Removing: /var/run/dpdk/spdk_pid81972
00:34:29.795 Removing: /var/run/dpdk/spdk_pid82024
00:34:29.795 Removing: /var/run/dpdk/spdk_pid82093
00:34:29.795 Removing: /var/run/dpdk/spdk_pid82157
00:34:29.795 Removing: /var/run/dpdk/spdk_pid82223
00:34:29.795 Removing: /var/run/dpdk/spdk_pid82322
00:34:29.795 Removing: /var/run/dpdk/spdk_pid82402
00:34:29.795 Removing: /var/run/dpdk/spdk_pid82491
00:34:29.795 Removing: /var/run/dpdk/spdk_pid82547
00:34:29.795 Removing: /var/run/dpdk/spdk_pid82616
00:34:29.795 Removing: /var/run/dpdk/spdk_pid82679
00:34:29.795 Removing: /var/run/dpdk/spdk_pid82742
00:34:29.795 Removing: /var/run/dpdk/spdk_pid82840
00:34:29.795 Removing: /var/run/dpdk/spdk_pid82926
00:34:29.795 Removing: /var/run/dpdk/spdk_pid83059
00:34:29.795 Removing: /var/run/dpdk/spdk_pid83332
00:34:29.795 Removing: /var/run/dpdk/spdk_pid83365
00:34:29.795 Removing: /var/run/dpdk/spdk_pid83802
00:34:29.795 Removing: /var/run/dpdk/spdk_pid83977
00:34:29.795 Removing: /var/run/dpdk/spdk_pid84068
00:34:29.795 Removing: /var/run/dpdk/spdk_pid84171
00:34:29.795 Removing: /var/run/dpdk/spdk_pid84210
00:34:29.795 Removing: /var/run/dpdk/spdk_pid84230
00:34:29.795 Removing: /var/run/dpdk/spdk_pid84536
00:34:29.795 Removing: /var/run/dpdk/spdk_pid84574
00:34:29.795 Removing: /var/run/dpdk/spdk_pid84626
00:34:29.795 Removing: /var/run/dpdk/spdk_pid84991
00:34:29.795 Removing: /var/run/dpdk/spdk_pid85138
00:34:29.795 Removing: /var/run/dpdk/spdk_pid85934
00:34:29.795 Removing: /var/run/dpdk/spdk_pid86050
00:34:29.795 Removing: /var/run/dpdk/spdk_pid86204
00:34:29.795 Removing: /var/run/dpdk/spdk_pid86296
00:34:29.795 Removing: /var/run/dpdk/spdk_pid86593
00:34:29.795 Removing: /var/run/dpdk/spdk_pid86875
00:34:29.795 Removing: /var/run/dpdk/spdk_pid87216
00:34:29.795 Removing: /var/run/dpdk/spdk_pid87382
00:34:29.795 Removing: /var/run/dpdk/spdk_pid87502
00:34:29.795 Removing: /var/run/dpdk/spdk_pid87538
00:34:29.795 Removing: /var/run/dpdk/spdk_pid87721
00:34:29.795 Removing: /var/run/dpdk/spdk_pid87735
00:34:29.795 Removing: /var/run/dpdk/spdk_pid87777
00:34:29.795 Removing: /var/run/dpdk/spdk_pid88014
00:34:29.795 Removing: /var/run/dpdk/spdk_pid88226
00:34:29.795 Removing: /var/run/dpdk/spdk_pid88733
00:34:29.795 Removing: /var/run/dpdk/spdk_pid89507
00:34:29.795 Removing: /var/run/dpdk/spdk_pid90193
00:34:29.795 Removing: /var/run/dpdk/spdk_pid90958
00:34:29.795 Removing: /var/run/dpdk/spdk_pid91105
00:34:29.795 Removing: /var/run/dpdk/spdk_pid91182
00:34:29.795 Removing: /var/run/dpdk/spdk_pid91526
00:34:29.795 Removing: /var/run/dpdk/spdk_pid91578
00:34:29.795 Removing: /var/run/dpdk/spdk_pid92452
00:34:29.795 Removing: /var/run/dpdk/spdk_pid93000
00:34:29.795 Removing: /var/run/dpdk/spdk_pid93761
00:34:29.795 Removing: /var/run/dpdk/spdk_pid93872
00:34:29.795 Removing: /var/run/dpdk/spdk_pid93897
00:34:29.795 Removing: /var/run/dpdk/spdk_pid93944
00:34:29.795 Removing: /var/run/dpdk/spdk_pid93995
00:34:29.795 Removing: /var/run/dpdk/spdk_pid94047
00:34:29.795 Removing: /var/run/dpdk/spdk_pid94252
00:34:29.795 Removing: /var/run/dpdk/spdk_pid94321
00:34:29.795 Removing: /var/run/dpdk/spdk_pid94388
00:34:29.795 Removing: /var/run/dpdk/spdk_pid94439
00:34:29.795 Removing: /var/run/dpdk/spdk_pid94473
00:34:29.795 Removing: /var/run/dpdk/spdk_pid94540
00:34:29.795 Removing: /var/run/dpdk/spdk_pid94680
00:34:29.795 Removing: /var/run/dpdk/spdk_pid94888
00:34:30.056 Removing: /var/run/dpdk/spdk_pid95473
00:34:30.056 Removing: /var/run/dpdk/spdk_pid96196
00:34:30.056 Removing: /var/run/dpdk/spdk_pid96809
00:34:30.056 Removing: /var/run/dpdk/spdk_pid97517
00:34:30.056 Clean
00:34:30.056 05:23:59 -- common/autotest_common.sh@1453 -- # return 0
00:34:30.056 05:23:59 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:34:30.056 05:23:59 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:30.056 05:23:59 -- common/autotest_common.sh@10 -- # set +x
00:34:30.056 05:23:59 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:34:30.056 05:23:59 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:30.056 05:23:59 -- common/autotest_common.sh@10 -- # set +x
00:34:30.056 05:23:59 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:30.056 05:23:59 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:34:30.056 05:23:59 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:34:30.056 05:23:59 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:34:30.056 05:23:59 -- spdk/autotest.sh@398 -- # hostname
00:34:30.056 05:23:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:34:30.317 geninfo: WARNING: invalid characters removed from testname!
00:34:56.911 05:24:24 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:58.830 05:24:28 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:01.381 05:24:30 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:03.935 05:24:33 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:07.250 05:24:35 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:08.733 05:24:38 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:12.036 05:24:40 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:12.036 05:24:40 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:12.036 05:24:40 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:35:12.036 05:24:40 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:12.036 05:24:40 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:12.036 05:24:40 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:35:12.036 + [[ -n 5765 ]]
00:35:12.036 + sudo kill 5765
00:35:12.048 [Pipeline] }
00:35:12.069 [Pipeline] // timeout
00:35:12.075 [Pipeline] }
00:35:12.093 [Pipeline] // stage
00:35:12.099 [Pipeline] }
00:35:12.117 [Pipeline] // catchError
00:35:12.128 [Pipeline] stage
00:35:12.131 [Pipeline] { (Stop VM)
00:35:12.146 [Pipeline] sh
00:35:12.431 + vagrant halt
00:35:14.980 ==> default: Halting domain...
00:35:19.203 [Pipeline] sh
00:35:19.484 + vagrant destroy -f
00:35:22.032 ==> default: Removing domain...
00:35:22.620 [Pipeline] sh
00:35:22.905 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:35:22.916 [Pipeline] }
00:35:22.931 [Pipeline] // stage
00:35:22.936 [Pipeline] }
00:35:22.950 [Pipeline] // dir
00:35:22.956 [Pipeline] }
00:35:22.971 [Pipeline] // wrap
00:35:22.977 [Pipeline] }
00:35:22.990 [Pipeline] // catchError
00:35:23.000 [Pipeline] stage
00:35:23.003 [Pipeline] { (Epilogue)
00:35:23.016 [Pipeline] sh
00:35:23.304 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:28.593 [Pipeline] catchError
00:35:28.595 [Pipeline] {
00:35:28.609 [Pipeline] sh
00:35:28.894 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:28.895 Artifacts sizes are good
00:35:28.905 [Pipeline] }
00:35:28.919 [Pipeline] // catchError
00:35:28.930 [Pipeline] archiveArtifacts
00:35:28.937 Archiving artifacts
00:35:29.062 [Pipeline] cleanWs
00:35:29.078 [WS-CLEANUP] Deleting project workspace...
00:35:29.079 [WS-CLEANUP] Deferred wipeout is used...
00:35:29.098 [WS-CLEANUP] done
00:35:29.100 [Pipeline] }
00:35:29.116 [Pipeline] // stage
00:35:29.121 [Pipeline] }
00:35:29.145 [Pipeline] // node
00:35:29.149 [Pipeline] End of Pipeline
00:35:29.199 Finished: SUCCESS